
Paid internet trolls


may be rude


The phenomenon of propaganda in the information age is pretty interesting.

 

After the 2016 election there were stories about "fake news," referring to Facebook ads being used to spread disinformation, among other delivery mechanisms. Then Trump co-opted the term in his attack on journalists.

 

Paid trolls in Russia make good money simply posting online in comment sections or forums.

 

With news about the FBI indicting individuals associated with the famous Russian "troll farm," the Internet Research Agency, the topic is back in the news.

 

Good article from CBC last week. 

 

 

When St. Petersburg journalism grad Vitaly Bespalov answered an online ad for a writer in 2014, he thought the gig at Russia's Internet Research Agency might help his fledgling career.

 
As he quickly learned, what he really signed up for was a job as a paid internet troll.
 
"They pose as people who are not really them," he told CBC News at his apartment in St. Petersburg. "By the second or third day, it was clear where I had landed and what this was actually."
 
Last month, U.S. special prosecutor Robert Mueller indicted 13 Russian nationals who worked at the so-called "troll factory" in St. Petersburg, accusing them of interfering in the 2016 U.S. election. The allegations include fabricating news and using false identities to sow discord in the United States ahead of the vote.
 
A Facebook posting by a group called Being Patriotic is shown above. A U.S. federal grand jury indictment says the Facebook group was created by Russians who promoted and organized two political rallies in New York, including one on July 23, 2016, called Down With Hillary! (Jon Elswick/Associated Press)
 
Bespalov left long before that period — after just three months on the job.
 
"It was really bothering me what I was doing. I knew I had stayed to get more information [on the operation] but this feeling of disgust stayed with me."
 
He says he's sharing his story now with the hope that it makes people more aware of how the "fake news" business works and in the hope that the operation will be shut down.
 
Bespalov says he was one of roughly 200 employees at the nondescript, low-rise St. Petersburg office building, far removed from the dazzling palaces of the czars that are the city's major tourist attractions.
 
He says he worked on a floor devoted to trash-talking Ukraine.
 
"I had to find 20 articles from Ukraine and rewrite them with the same tone as they would be written by our mass media."
 
Russian state media routinely denies its direct involvement in the conflict while denigrating those who support Ukraine's government.
 
In 2014, Ukrainian protesters helped overthrow the country's pro-Russian leader, triggering Russia's annexation of Crimea.
 
Bespalov says any stories the troll factory could produce that made Ukrainian soldiers look bad were encouraged, especially items involving dead children.
 
"One example: we saw a news story that some militiamen were hiding in the school and suddenly it was being shelled. Some children died.
 
"We simply took and wrote that Ukrainian soldiers shot the children and killed them. That's it. No hesitation," Bespalov says.
 
The stories were posted to a fake news site that had a Ukrainian internet address but was secretly run out of the St. Petersburg location, with the idea of making it appear as though many Ukrainians sided with the Russian view of the conflict.
 
The troll facility allegedly had several different areas devoted to different regions of the world. Bespalov says a team on the top floor was dedicated to posting fake news on Facebook sites.   
 
Another group at the troll factory wrote stories and comments for news sites inside Russia. Marat Minidyarov, 30, says he ended up with that group.
 
Minidyarov says in late 2014 he was working at a St. Petersburg youth hostel and met a guest who told him about a place he could make good, quick money.
 
After a brief interview, Minidyarov says he was hired.
 
"Every morning there was a list and the topics about what you were supposed to write," he told CBC News in St. Petersburg. "You never had your own opinion, you wrote [what] was written there.
 
"Putin is always good, always good, always good," he says. "And Obama was bad. The world was black and white."
 
His job was to write in the comments section of Russian news sites and counter anything negative about the Russian government.
 
"One hundred and thirty-five comments a day. Twenty people a shift. Two shifts day and night. Can you imagine how many comments are coming every day on the internet?"
 
Both former trolls say they were paid well — about $1,000 Canadian a month — all of it in cash.   
 
The Internet Research Agency in St. Petersburg was in business from 2013 until at least 2017 in this building. A security guard told CBC News the building is now '80 per cent empty.' Russian media report the troll operation has moved to a new location. (Corinne Seminoff/CBC)
 
U.S. authorities have indicted Yevgeny Prigozhin and allege he owned and operated the troll farm. Often referred to as Putin's chef, he's the food caterer for the Kremlin and other Russian ministries.
 
Prigozhin denies any connection to the troll operation.
 
The Kremlin's official response to the trolling allegations is that there's no connection between the facility and the Russian government.
 
Neither Minidyarov nor Bespalov says he ever saw Prigozhin — or witnessed any direct link with the Kremlin.
 
"It's hidden," says Minidyarov, "so you can't say for sure. But when I switched on my TV, it was absolutely the same news [on state television]. So why is it the same?"
 
Bespalov says he is certain that however deeply buried that link is, the troll farm was doing the bidding of the Russian government.
 
"[Putin] doesn't see this as [a] problem. In his ideology and view of the world, this is an equivalent step to the so-called 'negative actions' that the West is doing against Russia."
 
Both men say they have been harassed and intimidated.
 
Bespalov was mocked on a Russian news program and portrayed as being a hard-partying opposition supporter. The program played video of him wearing a T-shirt of an opposition candidate and dancing in a nightclub.
 
In late February, Minidyarov says police tracked him down at a friend's apartment and brought him in for questioning over an allegation of making a bomb threat.   
 
He says the allegation was entirely bogus.
 
Bespalov says the troll factory was just beginning to have an impact before he left. Its greater influence came later, when the English-language department was set up.
 
"In the Western audience, I think they are not used to these black games. They are more naive."
 
In the past, propaganda lacked the technological advantages it has now. What worries some observers is the targeted, surgical capability of digital propaganda: Facebook's ad-targeting AI lets anyone place carefully crafted propaganda in front of exactly the people most susceptible to it.

Paranoid schizo ramble time

 

Once we have Turing Test-passable AI chatbots, things will get a million times worse.  People will engage in long arguments with automated agenda-spreading bots that can chat with millions of people at once and never give up.  They will be trained to elicit emotional responses from certain types of users based on an analysis of their message history, which tells the bot their argumentative style and personal interests.  They may even strike up weak "friendships" with people to strengthen their persuasive abilities.  There is nothing fundamentally stopping this from happening, and it will force us to completely reshape the way we interact with the internet.  It will "kill" the internet, and that may even be a good thing

 

The next issue will be the ability to synthesize 100% realistic video of any person talking, and synthesize their voice as well.  Imagine the propaganda potential here.  Once again there is nothing stopping this from happening, and it WILL happen.

 

We will have to enter a new age of mass understanding of cryptographic identity-securing methods.  But is that even enough?  Here's an optimally secured scenario; we'll see whether even that suffices:

 

We will require digital signing of all video media spread by trusted news outlets, with a chain of digitally signed trust reaching from, for instance, White House video cameras to news outlet cameras.  We will need to move heavily toward simple privacy methods like PGP, paired with some real-world identity-creation method, preferably stored on a blockchain at some point.  For instance, like this:

 

-Make yourself a private key

-Take a picture of yourself holding up a sign saying "I'm Zeffolia and my public key is X, today's date is Y" for your communication with IRL friends

-Alternatively, take a screenshot of an old post saying "I'm Zeffolia and my public key is X, today's date is Y" from a long time ago for your communication in anonymous online communities

-Create a blockchain-based "proof of existence" of this identity-creation picture (a transaction embedding a hash of the picture in a popular blockchain like BTC, proving the picture existed at the time that block was appended to the chain)

-Digitally sign all of your messages and emails using the private key you established as your own (through its public key) in the identity-creation picture, which you will also attach in your signature on all emails.  Presumably in the future we will have easy browser extensions where you can right-click a picture, hit "Verify creation date -> BTC," and be told its blockchain proof-of-existence date
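The proof-of-existence step in the list above can be sketched in a few lines of Python. This is only an illustration of the hashing side (the actual blockchain transaction, e.g. an OP_RETURN output on BTC, is out of scope), and the sample "photo" bytes are made up:

```python
import hashlib

def proof_of_existence_digest(data: bytes) -> str:
    # SHA-256 digest of the identity photo; this is the value you would
    # embed in a blockchain transaction to timestamp the photo's existence
    return hashlib.sha256(data).hexdigest()

def verify_photo(candidate: bytes, recorded_digest: str) -> bool:
    # Anyone can later recompute the hash of a claimed identity photo
    # and check it against the digest recorded on-chain
    return proof_of_existence_digest(candidate) == recorded_digest

# Stand-in for the raw bytes of the identity-creation picture
photo = b"I'm Zeffolia and my public key is X, today's date is Y"
digest = proof_of_existence_digest(photo)

assert verify_photo(photo, digest)
assert not verify_photo(photo + b" tampered", digest)
```

Note that the on-chain record only proves the photo existed by a certain date; it says nothing about who made it, which is why the photo itself must tie the key to a face or an established account.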

 

Will this help you verify that everyone you're talking to online is the person they claim to be?  Yes, with reasonable certainty.  Will this help you verify whether anonymous posters are real human beings?  No, for the following reasons:
-Stolen private keys

-Private key identities created years ago and "matured" via bots by making seemingly legitimate posts on random websites, and only after a few years do they turn into propaganda creators

-Private key identities created by bots through scavenging pictures of people online, before the actual person creates their own identity for themselves

-People will just get lazy and stop caring about verifying the identity of posters since it's impractical

 

Basically there is no solution except increasing public education regarding how this shit works.  Sometimes I talk to my dad about random technology topics and he asks questions like "So who puts together all the searches?", which, needless to say, is how a lot of older people think.  That's expected: the internet and software in general are really hard to comprehend for people who haven't studied how they work.  But this fact will make the issue really hard to fix

 

Software is so good at what it does that it has just become transparent, basically magic to people.  And its powers are increasing exponentially, and people just can't keep up with what is going to happen.  Things are going to keep getting weirder and weirder and weirder and fucking weird as shit, and I don't think there is a good solution.


Luckily some people are really skeptical of propaganda shills and even bots.  But unfortunately a lot of people are skeptical in the wrong direction.  Post anything even remotely anti-Trump on /pol/?  You will be called a paid shill even when you're not, even when you're literally posting a fact or a video clip of Trump himself saying something.  So "be skeptical" is not a solution, because it clearly isn't working for these people.  Skepticism has to be directed at the right place to be effective.  There's a tendency to treat skepticism as a bias-avoidance mechanism, but it isn't one, and for that very reason it can reinforce biases: people tell themselves "I'm being skeptical instead of taking in everything I'm told" and think this makes them enlightened, when really they're falling further into an echo chamber of literally untrue bullshit, because the skepticism is applied in a biased, directed way.  So it's not a solution

 

What is the solution?  Let's go even further into paranoid schizo bullshit here.  How do we know that at some point in the future the internet won't be silently co-opted by a government somewhere?  And how do we know that Turing Test-passable AI chatbots won't someday get so advanced that these governments can trivially generate an entire internet for themselves on demand as their citizens search for things?  A completely isolated internet that looks like it's connected to American news, to Russian news, to the rest of the world, but is actually a gigantic propaganda machine dynamically generated to seem real

 

How do we know this isn't happening right now?  Well, right now we're lucky, because this technology does not exist, as far as we know.  And we can just travel to other places and see for ourselves.  But future generations will have to deal with things like this, and it's a really scary thought.  How will anyone be able to verify this stuff if travel bans are imposed for seemingly good reasons?  They can't: their governments will be able to fabricate video evidence of the entire rest of the world being in ruins and their country being the only successful one.  Something like the North Korea situation

 

Don't dismiss this - we already have precedent for this type of thing happening in NK.  If you think it can't happen elsewhere, you're sadly mistaken.


This entire issue hinges on one core concept: in some situations, the reality of the situation can be decided in two completely different directions based on information that can be fundamentally hidden.  What if there legitimately was a massive world war and only Hawaii survived, and its government actually decided to stop citizens from travelling outward for their own well-being, because the population was dwindling and every life mattered?  They couldn't risk radiation exposure and deaths if people ventured out to see for themselves.  And all the news everywhere said, "The whole world is in ruins; we are all that is left"

 

How do the people living in Hawaii differentiate this from the alternative explanation - that it's all fabricated propaganda?  This would be easy to enforce somewhere like Hawaii because it's completely self-contained.  It would be harder in a large geographical area, especially one connected to other countries by long borders, like the US.  But in Hawaii it would be absolutely trivial to implement once the technology exists.  People would get paranoid and think, "Fuck, is this all bullshit?  Is this news all literally fake?"  And there is actually no way for them to verify it without breaking the law.  This is exactly what is happening in North Korea.  Once the technology matures to the point where all media communication can be co-opted and dynamically generated as propaganda, the epistemic questions of "what is truth?" and "how much should I trust my government and ISP?" get fundamentally harder

 

You'd need private keys from a pre-AI era (before Turing Test-passable chatbots, before AI video synthesis), and you'd need to personally decide that the only information you'd accept is information signed by identities established in that era.  The keys would need to be stored in your mind, memorized so they can't be tampered with, or kept by some reliable analog method.  Or you could simply have a single private key of your own, with which you sign all the stored public keys of your other trusted sources, so you know nobody can tamper with them.  Your one source of truth in the world would become your own private key: the only thing you know you can trust for sure.  And you would know you can trust anything signed by a source whose identity you established in the pre-AI era
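A toy sketch of that last idea: using your one long-term secret to make your stored list of trusted keys tamper-evident. This uses an HMAC as a stand-in for a real digital signature (a real setup would use asymmetric signatures such as Ed25519, so others could verify too), and every name and key value here is hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical root of trust: the one secret you memorize or keep offline
MASTER_SECRET = b"memorized-or-analog-stored-secret"

def seal_keyring(trusted_keys: dict) -> dict:
    # Serialize the trusted-keys list deterministically, then authenticate it
    # with your master secret so any later modification is detectable
    payload = json.dumps(trusted_keys, sort_keys=True).encode()
    tag = hmac.new(MASTER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"keys": trusted_keys, "tag": tag}

def verify_keyring(sealed: dict) -> bool:
    # Recompute the tag over the stored keys and compare in constant time
    payload = json.dumps(sealed["keys"], sort_keys=True).encode()
    expected = hmac.new(MASTER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sealed["tag"], expected)

ring = seal_keyring({"whitehouse-cam": "pubkey-A", "news-outlet": "pubkey-B"})
assert verify_keyring(ring)

ring["keys"]["news-outlet"] = "attacker-key"  # tampering is detected
assert not verify_keyring(ring)
```

The point of the sketch is only the trust topology: one secret you control vouches for everything else you've stored, so an attacker who can rewrite your disk still can't silently swap in their own "trusted" keys.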

 

Maybe we need to prepare and get our governments ready for the post-AI era, ready with their private keys (heavily over-powered ones using much larger bit lengths).  Or maybe I'm talking bullshit and just don't know what I'm talking about.  Or maybe I'm too paranoid.  But sometimes it's good to be paranoid.  This may not happen in our lifetimes, but it WILL happen, because nothing is stopping it, and all the velocity of human achievement is bringing us to that point


Unrelated but I think the human ability to create and utilize tools, as well as create compartmentalized work routines, will end up with us creating something that is greater than any individual one of us.  I think we're inadvertently creating a situation where we are analogous to individual cells in a larger social super-organism guided by the internet as our brain which coordinates us, and individual cell phones and computers are kind of like nerve cells reaching into the limbs of the super-organism and coordinating it with the other cells.  Things seem okay for us, we live our lives, we get positive responses to good input stimuli, but we will accidentally sacrifice our freedom for the convenience that comes along with living together in a collective guided by a central data-analysis and control system - the internet

 

Maybe it's just a fundamental part of our evolution as a group of intelligent matter and we shouldn't resist it.  Where would we be if single celled organisms resisted the initial symbiotic relationships that resulted in them becoming individual parts of multicellular organisms?  Nowhere.  Maybe things are going okay.


Just for clarification, the North Korean intranet does not pretend to link to American news outlets.

 

As to the OP, I mean this type of disinformation campaign is nothing new. China has had “wumao” for a long time.



 

Yes, paid posting is nothing new. I read an article saying China found that saturating discussions with paid posters works better than censorship.

 

I could have named the thread something else; I intended it for a broader topic. The recent demonstration of Russian paid posting is just a convenient illustration of how much we need norms that reinforce skepticism.

 

Paid posting in conjunction with Twitter bots, Facebook ads, and a hacked email cache dumped through WikiLeaks, all as parts of a Kremlin campaign to tip a US presidential election away from a Kremlin antagonist and toward a Kremlin stooge, is a new development.

 

Evolving capabilities and applications of software continue to heighten the problem, such as the ever-increasing amount of user data available and rapidly improving machine learning capabilities.

 

Did digital propaganda tip the US election?

 

PA, FL, MI, and WI were projected to go to Clinton by a wide margin, and the differences between polling averages and the actual vote outcomes were historically large.


 

In the future, no one knows what anything is.

 

 

We got there years and years ago, it's just that more people are finally starting to notice.

 

 

The only way to do truth is networks of trust. Scientists could doubt their instruments, but they trust the community that has verified their trustworthiness. Some things can seem to be proven by evidence, but turn a good lawyer loose on it and you will find that you are trusting an unverified source. Truths are probabilities.

 

We can establish norms as a society for better scrutiny. It's disturbing that so many can't distinguish between when someone talks like a scientist and when someone talks like a salesperson. The inevitable adaptation to the modern world will be such critical thinking.

 

It's a pretty crazy situation that we will be forced to adapt in this way.


 

 


 

 

One of the more surreal things of late: even as I remind myself that so much of the online vitriol is composed of bots, trolls, spam accounts, etc., I have to wrestle with the fact that it's working. It wears me down mentally when I come across it, but more distressing is that I know people falling into the trap. Granted, some people become old and senile, or jaded and cynical, and ideological fringes are not new, but the speed and absurd turns it has taken with some people are baffling and sad. It's not the ideology itself; at this point that's a moot aspect. It's this attack on the basic notions of truth and justice.

 

I know this is nothing new, but it's surreal to see that as propaganda, doublespeak, etc. are fully exposed, they have ironically become more persistent and effective than ever. It's almost like a disease spreading. Even though the US, as of now, is nowhere near as bad as the fascist and authoritarian societies of the past, when relatives and friends begin turning on each other, and even resorting to killing via mobs, militias, despotic alliances, etc., the parallels are there. It's a civil war of rhetoric.


Politics, and especially international politics and diplomacy, has always been full of smoke and mirrors. It's very hard to find any kind of objective truth when almost everybody has their own agenda. There have also always been fake grassroots organizations and infiltration of NGOs, political groups, news organizations, etc. Maybe the internet has just accelerated this whole shit circus by an order of magnitude, or maybe it's just easier to see now that the competing truths are so available and in everybody's face.



 

When you compare different spins, look at a lot of information, pay attention to source attribution, and consider perspectives from people you respect, you can start to understand how things are working on the ground, in many cases. It takes prolonged effort to get to the bottom of things. Some subjects are harder than others.

 

It's way more than an order of magnitude. The ability to target individuals using their online user data alone is at least an order of magnitude. Propaganda used to be applied in a blanket way, such as commercials on TV.

 

The targeting being automated is at least another order of magnitude. Facebook automatically puts your ads in front of the people most likely to respond to them.

 

Being able to apply AI to this process is at least another order of magnitude. AI is adroit at finding patterns in data sets, and can profile people in order to successfully target them in ways conventional programmers couldn't dream of.

 

The feasibility of dumping money on troll farms to pose as an organic internet population is at least an order of magnitude. It's cheaper and more effective to employ paid actors for this purpose when their presence is confined to a computer screen.

 

The ability to loose bots on a social media platform is at least another one. They're clunky but numerous. There are millions and millions of them.

 

Propaganda has taken a quantum leap.


Well, maybe it's more covert and in a way more intelligent now.

 

But let's take Russian propaganda. Is it really worse than during the Cold War, when the Soviet Union tried to pass itself off as a nation of peace in Europe and influenced lots of governments behind the scenes? Or the early 20th century, when pieces like the Protocols of the Elders of Zion were written somewhere in the Russian Empire as anti-Semitic propaganda?

 

Maybe Russian propaganda and political influence are relatively new things in the US, but here in Finland they have been present for a long, long time.

