Many Americans were shocked at the anti-Jewish rhetoric on display this month at a white nationalist rally in Charlottesville, Virginia. But not IU Bloomington faculty member Günther Jikeli and the students who took his contemporary antisemitism course in spring 2017.
The class spent the semester studying and even responding to online expressions of hate and prejudice in an effort to learn about best practices for combating antisemitism on social media.
“When these fringe groups go public, as in Charlottesville, they were confronted,” said Jikeli, visiting associate professor in IU Bloomington’s Borns Jewish Studies Program and Justin M. Druck Scholar at the Institute for the Study of Contemporary Antisemitism. “But on social media, the same messages are unchallenged. They are just disseminated, and they can grow even worse.”

As part of the U.S. State Department’s Diplomacy Lab initiative, the 15 students in the class conducted original research and produced a 25-page report, titled “Best Practices to Combat Antisemitism for Social Media.”
Diplomacy Lab lets the State Department “course-source” policy-related research to colleges and universities. Ira Forman, then the department’s special envoy for monitoring and combating antisemitism, proposed a project on countering online antisemitism, and Jikeli thought it would be a good fit for his course.
The students started by studying what has been written about online antisemitism. Then they developed and administered a survey of nongovernmental organizations that deal with the issue.
Finally they conducted their own computer research. They developed lists of keywords associated with online expressions of antisemitism and set about identifying individuals and groups that were disseminating antisemitic posts. And they experimented with ways to respond to the posts.
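The article does not publish the students' actual keyword lists or tools, but the kind of keyword-based flagging described above can be sketched roughly as follows (all keywords, user names and posts here are hypothetical placeholders, not from the class's research):

```python
# Minimal illustrative sketch of keyword-based post flagging.
# KEYWORDS stands in for a watch list like the one the class compiled;
# the terms below are neutral placeholders.
KEYWORDS = {"keyword_a", "keyword_b", "keyword_c"}

def flag_posts(posts):
    """Return (author, text) pairs whose text contains any watched keyword."""
    flagged = []
    for author, text in posts:
        words = set(text.lower().split())
        if words & KEYWORDS:  # any overlap with the watch list
            flagged.append((author, text))
    return flagged

posts = [
    ("user1", "nothing of note here"),
    ("user2", "contains keyword_a in the text"),
]
print(flag_posts(posts))  # only user2's post is flagged
```

Real research tools would need stemming, phrase matching and human review on top of this, since bare keyword hits produce many false positives.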
The class identified three categories of social-media users who were posting antisemitic messages:
- Committed white nationalists, neo-Nazis and white supremacists.
- Users who consider themselves anti-Zionists but who demonize or delegitimize Israel and the Jewish people in ways that meet the State Department’s definition of antisemitism.
- Users who accept and share antisemitic conspiracy theories such as those suggesting Jews control the media, financial systems and governments.
Using social-media accounts with assumed names to protect their privacy, students tried responding with straightforward messages (“Excuse me, that statement is antisemitic”); with more personal responses; and with humor, memes and persuasion. Nothing really worked, especially with the white nationalists, who tended to double down on their extreme posts when confronted.

Jenna Solomon, an IU junior from the Chicago area, went into the course with her eyes open. A Jewish studies major with a minor in Hebrew, she considers herself an activist who campaigns against antisemitism, anti-Zionism and racism. But she admitted to being shocked by some of the online hate.
“I came in knowing a lot, but I came out knowing more,” she said.
Students, several of whom were Jewish, spent part of the class preparing to encounter hatred online. Several months later, Solomon talked casually about the obscene slurs and threats but admitted to being stunned when one user flatly denied that the Holocaust took place.
“That one really got to me,” she said. “Coming from a large Jewish community, I had never been exposed to anyone who would deny something so big, in the face of so much evidence and documentation.”
While responding to antisemitic posts almost never changed the posters’ behavior, the report argues that there is value in engaging: a public counternarrative may reach the wider social-media audience, the users who are watching but not posting.
The most effective response, the report concludes, is to work with social media and Internet platforms to take down the worst posts and ban those who are posting them. The platforms’ terms-of-service agreements typically prohibit antisemitism, racism, hate speech and calls to violence, but enforcement of the agreements is often lax.
“It’s a complicated thing,” Jikeli said. “We have to ensure freedom of speech, but we don’t have to accept bullying and hate messages just because they are made online. The online world has become a real part of our social life. There has been a little bit of movement in the last two or three years, where Facebook and Twitter decided they had to take these concerns seriously. But there are not many people flagging these messages.”