Jacquelyn Mason

Pre-Thesis

October 21st, 2019

- what is your topic (revisit the thesis statement template)

My general topic is misinformation and disinformation: where it begins, how it moves and spreads in online spaces, and what content moderation can do to slow it down. More specifically, I am looking at black online spaces and their moderators as an example of how content moderation can be done. The findings within the Mueller report that African Americans were the most targeted by misinformation in the 2016 election have left me with some lingering questions. There seem to be data voids about the effects of this campaign. Was using the African American community a successful way of spreading dis/misinformation? Did this tactic work in any way?

- why is this important to you? why should we care?

We should care because disinformation is pervasive and effective. We should care because the ultimate goal of spreading false news is to create such distrust within our society that we trust nothing. Companies like Facebook, Twitter, and Google have no incentive to change the way they deal with dis/misinformation, and the American government is currently opposed to imposing any kind of legislation. These platforms are how the majority of Americans now obtain their news. If we cannot trust what is spread to be fact-checked and fact-based, then our democracy is in danger.

- who do you need to talk to next? (revisit the stakeholders in your space) what questions do you have for them? what research questions are you investigating now? (and based on the questions think of alternative methods to ask those questions as in through participatory activities, experiments, etc.) --> look through the Universal Methods of Design for inspiration.

I will be talking to an associate who works with Color of Change to do some preliminary research on who is best to talk to within the community. My subjects would be activists, moderators of private online groups, and other social media users who may have noticed or brushed up against bad-actor activity online during the 2016 election. A few of the methods I would like to try are:

Observation, Case Studies, Cultural Probes, Behavioral Mapping, Interviews, Personas

- give us a one liner on something interesting you learnt: "Did you know that .... ?"

Did you know that there are five useful questions you should always ask yourself when verifying online information?

[Image: five questions to ask when verifying online information]

October 7th - October 14th, 2019

The abstract idea for my thesis is misinformation: how it spreads on social media and beyond, and the best practices for curbing it and potentially slowing it down. I have not yet committed to a particular “form,” and now, looking at the bigger picture and reading more about intertwined conspiracies and content moderation, I do not think I will go that route. Instead, I am thinking about disinformation and misinformation narratives: how they spread, where these narratives show up, and, when they do show up, how much we are able to see. I think I would like to stop focusing on the content of disinformation and focus on why people believe and spread said content.

Bots, Bad Actors, Troll farms, Election and Census 2020

Great examples of community-driven moderation

I have been considering the idea of looking at misinformation surrounding the 2020 Census, as well as the targeting of black voters in the 2016 general election and how that targeting will inevitably change, and potentially grow worse, in the lead-up to the 2020 election.


Through some preliminary research, I have found that there are data voids around how misinformation affects black voters, or the average black person online. The Mueller report has shown that black communities were the most targeted by foreign entities leading up to the 2016 election. However, I don’t believe anyone is asking the question: “Why are black online communities disproportionately targeted when there is no quantitative data to prove that this works?” In fact, black communities seem highly distrustful of misinformation and disinformation. My reasoning for this can be seen in the following articles.

Moderation gone wrong

Before Gamergate and similar episodes, black feminists saw a wave of alt-right troll accounts on Twitter, but their insight was largely ignored. Trolls also descended on a Reddit forum called Black People Twitter, and a system was put in place to distinguish “authentic black users” from bad actors. This is a great example of what community-driven content moderation can look like, instead of leaving it up to the platforms, who often discriminate. Below is an example of how a post was banned from Facebook for using the term “white people,” despite the fact that the post was an overall positive one:

Data Voids

I am also curious how data voids may or may not contribute to black voters’ perceptions of misinformation. danah boyd’s categories of data voids can help me to prove or disprove this idea (a rough search-interest probe is sketched after this list):

  • Data voids that are actively weaponized by adversarial actors immediately following a breaking news event, usually involving names of locations or suspects in violent attacks (e.g., “Sutherland Springs” or “Parkland.”)

  • Data voids that are actively weaponized by adversarial actors around problematic search terms, usually with racial, gendered, or other discriminatory intent (e.g., “black on white crime” or “The Greatest Story Never Told” or “white genocide statistics.”)

  • Data voids that passively reflect bias or prejudice in society but are not actively being weaponized or exploited by a particular group (e.g., “CEO.”)
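One rough, low-effort signal of a possible data void is how little search interest a term had before a breaking event. Below is a minimal sketch of that probe, not a definitive method: it assumes the unofficial pytrends library for Google Trends, and the term and event date are placeholders I made up.

```python
# Rough probe of a possible data void: compare search interest in a term
# before and after a breaking-news date. Assumes the unofficial pytrends
# library (pip install pytrends); the term and date below are placeholders.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
term = "example search term"          # hypothetical term to check
event_date = "2018-06-01"             # hypothetical breaking-news date

# Weekly interest for 2018; values are 0-100, relative to the term's own peak.
pytrends.build_payload([term], timeframe="2018-01-01 2018-12-31")
interest = pytrends.interest_over_time()

if interest.empty:
    print(f"No search-interest data at all for {term!r} (itself a void-like signal).")
else:
    before = interest.loc[:event_date, term].mean()
    after = interest.loc[event_date:, term].mean()
    print(f"Mean interest before event: {before:.1f}, after: {after:.1f}")
    # Near-zero interest before a sharp spike is the pattern boyd describes:
    # little authoritative content exists for the term until bad actors rush
    # to fill the gap.
```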

According to a new report, Russian information operatives working for the Internet Research Agency had an "overwhelming operational emphasis on race ... no single group of Americans was targeted ... more than African Americans." 

I have some theories that the reason black people are the most popular misinformation target is just another instance of fundamentally misunderstanding the “other,” rather than a proven, working strategy. I’m also unsure whether anyone has yet bothered to research the “why” of this targeting. Through field research, I want to determine exactly what works and what does not work within the African American community.

Content Moderation

I am also interested in content moderation as a potential research topic. Presently, various platforms have technological tools available in an attempt to curb the spread of misinformation and disinformation. I do not believe that there is one “tool” that can stop this. I think that content moderation is something that we, the users, will have to be socially responsible for; we will need to continually educate ourselves on how to detect false news and information and be personally accountable for stopping it from spreading.

To support these ideas, I’m currently in the process of reading a few texts about content moderation: Custodians of the Internet, Speech Police: The Global Struggle to Govern the Internet, and Regulating Speech in Cyberspace. I think that regardless of what kind of misinformation I choose, I would like my research to end with possible solutions for users and platforms.

[Image: Tarleton Gillespie, Custodians of the Internet]
  • “Many users—more and more every day—are all too aware of how social media platforms moderate. Believing that these platforms are wide open to all users, and that all users experience them that way, reveals some subtle cultural privilege at work. For more and more users, recurring abuse has led them to look to the platforms for some remedy. Others know the rules because they’re determined to break them. And others know about platform moderation because they are regularly and unfairly subjected to it. Social media platforms may present themselves as universal services suited to everyone, but when rules of propriety are crafted by small teams of people that share a particular worldview, they aren’t always well suited to those with different experiences, cultures, or value systems.” - Custodians of the Internet

  • “Moderation is hard because it is resource intensive and relentless; because it requires making difficult and often untenable distinctions; because it is wholly unclear what the standards should be; and because one failure can incur enough public outrage to overshadow a million quiet successes.” Tarleton Gillespie

  • “If moderation should not be conducted the way it has, what should take place?” - Custodians of the Internet

Content moderation is a tricky subject to handle. Researchers and designers alike are envisioning creative ways to govern platforms. A text that I regularly consult for my research is Jenny Fan’s Jury Duty for the Internet: A Civics-Oriented Approach to Platform Governance. The idea that we, the people, can form online juries to determine what should or should not take place on a virtual landscape goes along with my hypothesis that content moderation needs to be user driven (a toy sketch of what such a jury flow might look like follows the quote below):

“In a citizen-sovereign view of platform governance, the role of the user is re-framed from customer to citizen and the role of speech on platforms is re-framed from commoditized product to information commons. Eleanor Ostrom has previously identified design principles critical to the successful self-governance of common resources, such as community participation in rule making, monitoring for enforcement, graduated sanctions, and dispute resolution mechanisms [56]. Incorporating these principles into the design of platform governance can be effective in regulating behavior deemed undesirable or non-normative to an online community” [44].
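To make the jury idea concrete for myself, here is a toy sketch (entirely my own, not from Fan’s paper) of how an online jury flow might run: randomly empanel jurors from active community members, take a majority vote on a flagged post, and apply a graduated sanction based on the offender’s history. Every name, size, and threshold is a made-up placeholder.

```python
# Toy sketch of a civics-style "online jury," loosely echoing the Ostrom
# principles quoted above: community participation, monitoring, graduated
# sanctions. Names, sizes, and thresholds are all hypothetical.
import random
from collections import Counter

def empanel_jury(active_members, size=9, seed=None):
    """Randomly select a small jury from the community's active members."""
    rng = random.Random(seed)
    return rng.sample(active_members, k=min(size, len(active_members)))

def jury_verdict(votes):
    """Simple majority vote; each vote is either 'remove' or 'keep'."""
    tally = Counter(votes)
    return "remove" if tally["remove"] > tally["keep"] else "keep"

def graduated_sanction(prior_violations):
    """Escalating consequences instead of an immediate ban."""
    ladder = ["warning", "24-hour mute", "7-day suspension", "permanent ban"]
    return ladder[min(prior_violations, len(ladder) - 1)]

# Hypothetical run-through
members = [f"user_{i}" for i in range(200)]
jury = empanel_jury(members, size=9, seed=42)
votes = ["remove", "remove", "keep", "remove", "keep",
         "remove", "remove", "keep", "remove"]   # one vote per juror
if jury_verdict(votes) == "remove":
    print("Post removed; sanction:", graduated_sanction(prior_violations=1))
```

The point is less the code than the shape of the process: rule making, monitoring, and sanctions all stay visible to the community rather than happening inside an opaque platform team.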

In a recent white paper, Joan Donovan and danah boyd consider the idea of “strategic silence” around issues such as white violence and hate speech:

  • “Platform companies now curate news media alongside user generated content; these corporations are largely responsible for content moderation on an enormous scale. The transformation of gatekeepers has led an evolution in disinformation and misinformation, where the creation and distribution of false and hateful content, as well as the mistrust of social institutions, have become significant public issues.”

  • “Content produced by users recruiting for extremist movements often breaks terms-of-service agreements and stronger moderation policies about amplification could flag users before they develop wide audiences.”

  • Moreover, the loudest voices currently claiming that moderation of information is censorship are often doing so to justify hate, harassment, and bigotry, challenging cyber-utopians’ assumptions of enlightened online speech.

  • The failure to develop interoperable standards for content moderation across platforms remains a significant reason why misinformation and disinformation campaigns continue to be effective attacks against public trust in social and political institutions.

Misinformation Narratives

Above I have written a great deal about how misinformation affects the black community online, as well as about content moderation overall. However, this week I took a step back to consider pursuing overall misinformation narratives in lieu of one type of misinformation. So for now, I am looking at the narratives that surround various types of misinformation. Some ideas that I am throwing around but am not yet committed to are immigration, vaccines, climate change, and hate speech, and how they may all be interconnected.

After I have determined the narratives, I think it is also important to determine the factors that contribute to misinformation “susceptibility” and how to identify the simple narratives that push mis/disinformation. Do data voids contribute to this susceptibility? One way to address this vulnerability might be to make sure that when people in a community have questions about a certain type of misinformation, there is someone they can go to for fact-based answers. This could help to counteract data voids. What I have learned through some of my preliminary research is that if you are at the right time and place to engage with rumors, you can slow them down.

Potential hypotheses:

How do narratives in different types of misinformation intersect?

How do we recognize different kinds of misinformation using a value system, and then protect vulnerable populations from that misinformation spreading within their communities?

Misinformation (particularly in meme form) begins mostly on 4chan and Reddit, then seeps into social media platforms, predominantly Twitter. How does this misinformation make it into the media? (A rough sketch of how I might start timestamping this path follows below.)
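One small, concrete way to start probing that path would be to timestamp a phrase’s earliest appearances on each platform and compare them. Below is a hedged sketch using PRAW, the Python Reddit API wrapper; the credentials and the phrase are placeholders, and Reddit’s search is not exhaustive, so this would only give a rough lower bound on when something surfaced there.

```python
# Rough sketch: pull early Reddit appearances of a meme phrase so the
# timestamps can be compared against its first sightings on Twitter or in
# news coverage. Assumes the PRAW library; credentials and phrase are placeholders.
import datetime
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="misinfo-thesis-probe/0.1 (read-only)",
)

phrase = "example meme phrase"           # hypothetical phrase to trace

# Reddit search is not exhaustive, so treat this as a rough lower bound only.
hits = list(reddit.subreddit("all").search(f'"{phrase}"', sort="new", limit=100))
if hits:
    earliest = min(hits, key=lambda s: s.created_utc)
    when = datetime.datetime.utcfromtimestamp(earliest.created_utc)
    print(f"Earliest of {len(hits)} results: r/{earliest.subreddit} at {when} UTC")
    print("Title:", earliest.title)
else:
    print("No Reddit search results for that phrase.")
```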

10 User Stakeholders

Facebook/Instagram

Twitter

Google

Amazon

Color of Change

DNC

New York Times

Journalism Schools

Universities

Stakeholder Map

[Image: stakeholder map]

This week I decided to take a step back from committing to one type of misinformation. I have instead decided that, for now, I would like to focus on narratives.

What are the different narratives?

Where do these narratives show up?

And when they do show up, how much do we see?

I have been thinking about belief systems surrounding certain types of misinformation: taking the largest kinds (immigration, vaccines, climate change, hate speech), applying a set of beliefs to them, and then seeing how they are connected.

I committed hours of research to refining my topic, as my topic is LARGE. The idea of misinformation is quite abstract, and trying to curb or stop its spread is even more difficult. As I have expressed before, I do not believe that there is a “tool” that can stop the spread of misinformation, although many platforms have tried. I believe it is something that humans are going to have to learn how to detect. I have been thinking about the overall narrative: what are the factors that contribute to misinformation “susceptibility,” and how do we identify the simple narratives that push mis/disinformation? For example, if people in a community have questions about a certain type of misinformation, is there someone they can go to for fact-based answers? This could help to counteract data voids. What I have learned through some of my preliminary research is that if you are at the right time and place to engage with rumors, you can slow them down.

Misinformation (particularly memes) begins mostly on 4chan and Reddit, then seeps into social media platforms, predominantly Twitter. How does this misinformation make it into the media?

I want to attempt to find some ways to quantify misinformation. For example, there is currently no way of quantifying whether the drop in measles vaccinations in the Philippines can be attributed to the misinformation surrounding the vaccine Dengvaxia.
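Even a crude first pass at “quantifying” could be a correlation between attention to the misinformation and the outcome of interest. The sketch below uses entirely hypothetical monthly numbers, just to show the shape of the calculation; it says nothing about the actual Dengvaxia case, and correlation would not prove causation anyway.

```python
# Illustration only: correlate a monthly index of misinformation attention
# (e.g., search interest or share counts) with vaccination coverage.
# Every number here is hypothetical, and correlation is not causation.
import numpy as np

misinfo_attention = np.array([ 5, 12, 30, 55, 70, 65, 50, 40])  # hypothetical 0-100 index
vaccination_rate  = np.array([78, 76, 70, 62, 58, 57, 60, 61])  # hypothetical % coverage

r = np.corrcoef(misinfo_attention, vaccination_rate)[0, 1]
print(f"Pearson correlation (hypothetical data): {r:.2f}")
# A strongly negative r would be consistent with, but not proof of,
# misinformation depressing vaccination; a real analysis needs real time
# series, lags, and confounders.
```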

I am thinking about how conspiracies move. I am also thinking about monitoring a current disinformation campaign and seeing how that goes.



Assignment 3 September 30th, 2019

Research

[Screenshots: Slate article on black feminists spotting alt-right Twitter trolls before Gamergate]

While I have yet to solidify my exact topic, I do know that I will be working on misinformation and disinformation online and how they affect different groups. I am leaning toward (but have not yet settled on) looking into how misinformation flows online, whether data voids affect unregistered or undecided voters, how misinformation will affect the 2020 election, and which groups are most vulnerable and most likely to be targeted. I have revisited some older articles to see what kinds of questions I might want to ask or investigate. I have a plethora of articles to draw from, but I have been looking for relevant books to use for research. Below are a couple of examples:

One Person, No Vote: How Voter Suppression Is Destroying Our Democracy

Participatory Culture in a Networked Era: A Conversation on Youth, Learning, Commerce, and Politics



One Word

My one “vague word” to describe my project is Understanding. I want to better understand how the internet works, how social media works, what makes people “tick,” what makes people want to vote and participate, and what makes people want to post, or even read, certain content. I think this understanding is the route to understanding how misinformation and disinformation flow and exist online.


Lists

I came up with lists, where each list has 10 entities. I found it helpful to do this again using post-its so I could move things around as they came to mind.

[Photo: post-it note lists]

Games

[Photo: M.A.S.H. game sheet]

M.A.S.H. is an easy game we would play in elementary school that “predicts your future.” Classically, M.A.S.H. stands for Mansion, Apartment, Shack, House: you write that in big letters at the top of a piece of lined paper and then choose four different categories. I used words from some of the phrases and categories that I created (above). Next, you draw a spiral, and the number of rings it has becomes the number you count by. You then count around the remaining words and categories, eliminating whichever entry you land on, until each category has one entry left. I did this four times, with four different categories and different words and phrases for each (a little code sketch of the elimination rule follows below):

[Photo: completed M.A.S.H. rounds]
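Since the elimination rule is hard to describe in words, here is a tiny sketch of it as I understand it; the categories and entries are placeholders, not my actual lists.

```python
# Tiny simulation of the M.A.S.H.-style elimination described above: count
# around the remaining entries and cross out the one you land on, until a
# single entry is left per category. Categories and entries are placeholders.
def mash_round(categories, count):
    """categories: dict of category -> list of options; count: spiral ring count."""
    remaining = {name: list(options) for name, options in categories.items()}
    pos = 0
    while any(len(opts) > 1 for opts in remaining.values()):
        # Only categories that still have more than one option stay in play.
        pool = [(name, opt) for name, opts in remaining.items()
                if len(opts) > 1 for opt in opts]
        pos = (pos + count - 1) % len(pool)
        name, opt = pool[pos]
        remaining[name].remove(opt)
    return {name: opts[0] for name, opts in remaining.items()}

# Hypothetical example
categories = {
    "who is affected": ["voters", "parents", "students", "immigrants"],
    "where": ["parks", "churches", "group chats", "forums"],
}
print(mash_round(categories, count=5))
```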

For the fifth round, I took the “winners” from each round to play a final time. Here are my results:

Voter, parks, “Who is vulnerable?”, and “Is the internet inherently good?”

Albeit wacky, this outcome has me thinking about how I may want to approach my research. Parks can be a large source of community; they help to keep people bonded. There are often concerts, protests, and farmers markets within these spaces.

Assignment 2 September 23rd, 2019

From Inspiration to Questions: What Inspires Me


Questions

Can we use the importance of ethics as a model to change the way we share information online as a means to curb the spread of misinformation and disinformation?

Can we use the demonetization of public figures who promote hate speech and misinformation as a means of silencing them?

Can we curb public shaming online, which has been shown to have a devastating impact on a user’s social, political, and financial life?

Can moderators of private groups impose an ethical system, or a list of “good principles,” on what members share within those groups?

Do data voids contribute to Voter Suppression?

Mapping Exercise

[Images: mapping exercise]

Assignment 1 September 9th, 2019

A little about me: I’m fascinated with the ways people (and platforms) utilize social media. Misinformation, disinformation, “cancel culture,” and social media addiction are topics that I am considering pursuing for my thesis.

In the last several months, I have been working at TED Conferences on a project called CIVIC, working for fellows who are pursuing different ways to approach misinformation online. For my final presentation, I wrote this and presented my findings to the CDC, WHO, Facebook, Twitter, and Google; my objective was to convey to them better ways to curb the spread of misinformation and disinformation. Please keep in mind that it is a draft, but it gives some background on what I have been working on and thinking about as far as future hypotheses. Feel free to skim, but the overview is: I like to use CrowdTangle, Google Trends, and other methods of data “scraping” to find hashtags, and then use those hashtags to find out how misinfo travels.
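As an example of the Google Trends side of that workflow, here is a minimal sketch using the unofficial pytrends library to chart interest in a hashtag and surface the queries rising alongside it. The hashtag and date range are placeholders, and CrowdTangle access would be a separate, credentialed API.

```python
# Minimal sketch of the Google Trends part of the workflow: chart interest in
# a hashtag over time and surface the queries that rise alongside it.
# Assumes the unofficial pytrends library; the hashtag is a placeholder.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
hashtag = "#examplehashtag"            # hypothetical hashtag to track

pytrends.build_payload([hashtag], timeframe="2019-01-01 2019-10-01")
interest = pytrends.interest_over_time()          # weekly 0-100 relative interest
related = pytrends.related_queries().get(hashtag, {})

print(interest.tail())                            # how interest has moved recently
rising = related.get("rising")
if rising is not None:
    print("Queries rising alongside the hashtag:")
    print(rising)
```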

I also like the idea of demonetizing public figures who promote hate speech as a means of silencing them. This article gives me hope that this could be a possibility.

I’m interested in the different ways the large companies can actually control the news that we consume.

I’m interested in the way public shaming actually gets accounts more followers on platforms, and how that can fuel “group think” motivated attacks.