Coauthor Julie S. Lalonde’s lived experiences with cyberviolence1 highlight the kinds of barriers girls face when they are targeted online and the inadequacy of the responses they receive when they report it. The concepts we examine are grounded in Julie’s experiences as a young feminist antiviolence advocate who is fully engaged online. First, we problematize the idea that empowering girls with technological skills and knowledge on how to report cyberviolence can adequately address the barriers it poses. Second, we discuss the increased risks faced by girls who subvert gender norms in online spaces. Third, we interrogate existing reporting options available to girls. We argue that current practices by social media companies and law enforcement undermine the utility of reporting as one tool in an equality tool kit that should be available not only to support girls in dealing with attacks but also to advance the systemic changes required to create an online environment in which girls’ participation can truly flourish.2 Our conclusion suggests necessary reforms.
Ever since I was a young girl, I’ve been an early adopter of online technology. I got my first e-mail address in 1995. Long before Tumblr and social media, I discovered feminism through a now defunct website for girls called Purple PJs. It had forums and columns written by and for young women. As a budding feminist living in an isolated small town in Northern Ontario, I was exposed, through Purple PJs, to the knowledge that other young girls were out there, railing against the patriarchy too. The Internet has always been a place where I’ve gone to engage as a feminist and find community, but it has also become a place where I face resistance and attacks.
Institutional Responses to Girls’ Reporting Are Inadequate
To say that girls need to be empowered to engage as equals online risks minimizing the agency, power, and resilience that girls demonstrate every day. Rather than arguing that society needs to empower girls, we argue in this article that society needs, instead, to create an environment in which girls can freely and equally exercise their rights of self-determination and autonomy without having to demonstrate superhuman powers of resilience in order to do so. We argue that institutions responsible for addressing girls’ online harassment need to provide transparent and effective reporting systems that address cyberviolence meaningfully.
We recognize that a multifaceted approach that addresses girls’ immediate online safety needs, along with the root causes of the cyberviolence targeted at them, is necessary for girls to engage fully in digital spaces (Bailey 2016; Fairbairn et al. 2013). As Shaheen Shariff and Ashley DeMartini (2015) point out, lawmakers, social media companies, educators, and the media must engage with girls to develop strategies to remove systemic barriers, provide timely support for those targeted by online attacks, and develop mechanisms for supporting girls’ own efforts to bring down those barriers. Several authors have examined how legislators and policy makers could improve legislative responses to cyberviolence against girls (Bailey and Steeves 2015; MacKay 2012; Shariff 2015). In this article, however, we focus on current approaches taken by law enforcement and social media companies and argue that these institutions are failing to develop adequate strategies to remove systemic barriers and provide adequate responses for girls targeted by online attacks.
As Sarah Heath (2015) notes, girls are often told that their empowerment lies in having the reporting tools and technological skills necessary to protect themselves from virtual attacks. We argue that it is unfair to place the burden of girls’ realization of their right to equal online participation squarely on their own shoulders without confronting the community’s obligation to address meaningfully the systemic barriers that have proved impossible for girls to overcome with technological skills and tools alone (Milford 2015). In fact, according to Amanda Hess (2014), the reporting solutions girls have been taught to rely on in order to address cyberviolence are often ineffective at ending the abuse.
Further, as Dustin Kidd and Amanda Turner (2016) remind us, some of the most widely publicized cases of cybermisogyny have been aimed at young women with exceptional technological skills who dared to challenge gender norms in the gaming industry. Although these young gamers had a strong technological skill set, including knowledge of reporting procedures (Alexander 2016), many of their complaints went uninvestigated and unresolved by the institutions that were supposed to protect them (Merlan 2015). These and other examples, like those of Rehtaeh Parsons and Amanda Todd,3 suggest that reporting to law enforcement agencies and social media companies may yield little return for girls targeted by cyberviolence. In the cases of Parsons and Todd, although both girls reported the abuse, they were told there was little to nothing that the police or social media companies could do about the harassing behavior, despite the relentless and sexualized harassment they faced. Proper investigations were not launched until after both girls died by suicide (Garossino and Cavoukian 2014; White 2015).
Technological Skill Does Not Necessarily Lead to Empowerment
My work with Hollaback! and Draw-the-line.ca educates young people on digital and nondigital ways of preventing sexual violence. I also work to change how people understand online violence. I am well versed in the safety strategies available and, unfortunately, am forced to use them actively myself.
I am a prolific Twitter user, with more than 13,000 followers, and a frequent media source on issues of violence against women. Between a strong Twitter presence and additional publications in the mainstream media, I’ve been privileged to have a platform for my work. But with that privilege comes incredible backlash. My online presence draws the ire of Internet misogynists. I face regular online attacks by misogynists who criticize my work, challenge my history of violent victimization, and threaten my well-being. Even with my skill set in preventing cyberviolence—it is my job to know how to deal with these threats—I have not been able to stop all the attacks against me. How can we expect anything more from girls?
Systemic Change Is Necessary
Despite a growing awareness of the systemic marginalization of girls online, the empowerment of girls through individualized reports of abuse and online safety skills continues to be widely touted as a solution for ending cyberviolence against them (OIPC 2015). danah boyd (2014) notes that girls are taught to protect themselves from predation by managing their online behavior (see also Bailey 2014). This includes keeping their social media accounts private (boyd and Hargittai 2010), using reporting options (Pulido 2013), and not expressing themselves online at all (Heath 2015). Jordan Fairbairn and Dillon Black (2015) note that these methods of self-protection are often the primary message that girls hear but, unfortunately, as Jordana Navarro and Jana Jasinski (2013) note, they have been ineffective in achieving full inclusivity for girls.
Remarkably, despite existing barriers, examples abound of girls’ uses of digital tools for self-expression and self-determination. In fact, girls are prolific users of the Internet, where they dominate image-based social media platforms (Perrin 2015), develop their identities (Hodkinson and Lincoln 2008), create content, tackle systemic issues, and promote their interests and safety needs (Daniels 2009). Studies show that girls are skilled at using the safety management options (boyd and Hargittai 2010) and tools available to them, however inadequate those tools may be (Madden 2013).
However, technological know-how, safety strategies, and reporting skills are only part of a strategy for creating welcoming spaces for girls online. Women- and girl-led organizations such as the Purple Sisters Youth Advisory Committee, Hollaback!, Take Back the Tech, and Trollbusters organize and advocate for girls’ technological rights, pushing for systemic change and developing girl-oriented nonviolent technologies. As Fairbairn and Black (2015) point out, these organizations focus not only on individual safety strategies but also on addressing intersecting systemic issues underlying cyberviolence, such as racism, homophobia, and sexism (Daniels 2009). In so doing, they shift the focus away from making individual girls responsible for their own misfortunes and draw attention to the roles and responsibilities that other stakeholders ought to be assuming in order to ensure that equality is a lived experience, rather than just another formalistic but unkept promise (Bailey 2015).
Girls’ efforts to refocus attention on other stakeholders are particularly important in the context of law enforcement agents and social media companies whose responses to reporting too often fall short. Solely focusing on empowering girls with technological know-how, including the skills to report attacks, further weakens the already inadequate contribution that these institutions are making toward ending cyberviolence against girls.
Girls Who Subvert Gender Norms Online Are at Particular Risk
As is the case with many other feminists, Twitter trolls are the background noise of my life. If I’m appearing in the media, speaking out against gender-based violence or advocating for women’s rights, I am guaranteed to be subjected to a deluge of harassing tweets, hate-filled e-mails, and even, on a few occasions, threatening voicemails. It pretty much comes with the territory.
This online harassment is not limited to a few individuals posting their distaste for my work. Groups of abusers actively coordinate against me and other feminist advocates, barraging us with hateful, violent, and obscene messages in their attempts to stop us from doing our work. One of the more recent attacks happened in the summer of 2016. I dealt with a series of impersonators who were repeatedly creating fake Twitter accounts that looked nearly indistinguishable from my own, with the same picture, the same information, and a similar name. Not content simply to annoy me, they used those accounts to tweet xenophobic messages to local organizations working with refugees. They were trying not only to harass me but also to discredit me in my community.
One of the most disappointing results of this kind of harassment is that young women and girls who are initially inspired by my work to be feminist or antiviolence advocates are later discouraged when they Google me and see the level of harassment that comes with this work. They’ve told me that they don’t think they could handle it. This harassment may not silence me, but it silences future leaders.
Cyberviolence Is Used to Control Girls
As Maeve Duggan’s (2014) research shows, girls are more likely to experience severe forms of cyberviolence in general and sexualized cyberviolence in particular. This is especially so for girls who subvert gender norms (Mascheroni et al. 2015) or explicitly challenge male privilege in online communities (Kidd and Turner 2016). Unfortunately, reporting the abuse they experience has not reduced the level of abuse that they face (Hess 2014).
Girls are targeted online because of intersecting systemic issues including (but not limited to) sexism, racism, homophobia, and ageism (Fairbairn and Black 2015). When girls either assert their rights or express themselves outside the bounds of stereotypically white heterosexual femininity online (Regan and Sweet 2015), their expression is often policed by other users (Senft and Baym 2015; Steeves 2015), they are publicly shamed (Lippmann and Campbell 2014), and they are threatened for their transgressions (Fairbairn 2015). Perpetrators of this kind of cyberviolence are often white men or boys who target girls they perceive as outsiders, including feminist, lesbian, and racialized girls (Daniels 2008). As Shariff (2015) argues, they often use the threat of sexualized violence as a means of silencing expressions of feminist power in the hope of pushing challenges to patriarchy and male power out of the public sphere (see also Hess 2014). One of the clearest examples of this is Gamergate, an online movement in which misogynistic male video gamers systematically target and harass women and girls, such as Anita Sarkeesian, who speak out about sexism in video game culture, often using the hashtag #Gamergate (Kidd and Turner 2016). When girls witness feminist leaders who challenge gender norms being harassed online with no recourse, they question their own ability to be leaders in this area, thus quashing their potential to advocate for systemic change (Hess 2014).
Some attackers have published sexualized images of girls online without their consent as a means to degrade and humiliate them. As Emily Lindin (2015) makes clear in her documentary film UnSlut, onlookers, who may not realize that these images were taken or distributed without the consent of the girls in them, often interpret these images as girls stepping outside of socially acceptable expressions of female sexuality and, in turn, they begin shaming the girls online. In some cases, images of girls being sexually assaulted or images that were taken without their knowledge have been used to harass the same girls for allegedly breaching gender norms, regardless of their lack of consent to either the image being taken or to the violent sexual attack recorded.
Identity-based attacks against girls are part of a greater trend of silencing women’s and girls’ voices online that cannot be fully addressed through improving girls’ technological skills or knowledge of reporting mechanisms. Unsatisfactory outcomes from reporting to social media and law enforcement agencies suggest that these institutions are not contributing toward ending cyberviolence as fully as they can and should be.
Inadequacy of Responses to Reports of Cyberviolence
I am regularly attacked on social media. I receive almost daily threats and harassment by other users who disagree with my work. The regularity of these attacks has made reporting to social media second nature to me. Screenshot the attack, save a copy, report it, repeat. Sometimes reporting to social media has helped me, but often my reports are brushed aside and the attacks continue. In the recent situation in which impersonators were out to damage my reputation in my community, Twitter did not make it easy for me to bring an end to the attacks.
Twitter’s rules allow it to shut down impersonation accounts that are not parodies, but the company has the discretion to decide which accounts count as parody. Just as with other offensive tweets, there is no guarantee that a reported account will be taken down. In my case, the impersonators were not parodying me; they directed racist and offensive comments at local immigration groups in my name, trying to hurt my reputation and subjecting those groups to racist attacks.
When I contacted Twitter to deal with the impersonators, I was told I had to submit a copy of my identification to prove that I was the real Julie S. Lalonde, even though my account had been on the platform since early 2011 and had more than 10,000 followers in contrast to the obvious impersonators who were mimicking my account. Their accounts had been up and running for just a few days with only a handful of followers.
Twitter put the burden on me rather than helping me stop the abuse immediately. This is particularly disconcerting for people like me who live in daily fear of being doxxed (having one’s personal information posted online in order to facilitate harassment). Sending a copy of my driver’s license to a faceless server at Twitter headquarters is, in and of itself, a very stressful process. I was worried that my address would be hacked by my attackers and published online. Each time I went through this process and an impersonator’s account was taken down, a new impersonator would pop up, and I had to send in my identification all over again. Meanwhile, the impersonator continued to post offensive tweets using my likeness while Twitter verified my complaint and identification on each separate occasion.4 This is just one example of how social media companies are failing to help women and girls who are being attacked; often they do nothing at all.
Social Media Companies: Terms of Service, Reporting Protocols, and Profiling
Social media companies’ standard form terms of service (TOS) contracts, privacy and reporting policies, data collection, and behavioral advertising strategies shape the online environment in ways that significantly affect girls’ experiences online (Bailey 2015). As Thorsten Busch and Tamara Shepherd (2014) point out, since social media companies are the providers of some of the dominant spaces for public discourse and social interaction, their increasing impact on people’s everyday lives arguably renders them quasi-governmental. With that shift in power should come increased responsibility for social media companies to create and maintain, accountably and transparently, safe and respectful online spaces that facilitate girls’ equal participation, rather than their victimization. This includes responding adequately to reports of abuse (Jaffer and Brazeau 2012). However, as Caitlin Dewey (2015) notes, some social media companies themselves have acknowledged that they have work to do in order to live up to this responsibility.
Unfortunately, standard form TOS are too often mechanisms for nontransparent, unilateral exercises of corporate authority (Busch and Shepherd 2014), which may be particularly problematic for girls targeted by cyberviolence. Nevertheless, in order to gain access to social media sites, girls must agree to these standard forms,5 something that, as Robert Glancy (2014) has said, most individuals rarely read or understand. As found by Michelle Sargent (2013), the design, content, and format of online contracts may be beyond the reading comprehension and literacy skills of many girls (and probably many adults, for that matter).
While most social media sites include reporting and safety strategies in their TOS, these provisions often offer very limited protection to girls targeted by cyberviolence for a number of reasons. First, as Rima Athar (2015) reports, it is difficult, if not impossible, to find accurate data on what forms of abuse are investigated and how often the rules in the TOS are enforced, making it difficult for girls to assess what types of complaints will be addressed by social media companies. Second, many of those who do complain find that their reports are not taken seriously, so much so that even threats of death and rape may not be deemed to have violated the TOS (Gandy 2014). Logically, girls are reluctant to report if they do not think they will be taken seriously (Shariff 2015). Third, as can be seen in TOS, such as those of Twitter (2017), companies typically draft standard form TOS to define only vaguely the rules relating to content and to maximize the company’s exclusive discretion to interpret and apply the rules contained in them. As a result, users most often have no way of holding social media companies legally accountable for failing to enforce them (Scott and Eddy 2016). Vaguely defined, nontransparent exercises of discretion and lack of action by social media companies create little reason for girls to believe these companies are actively working to end cyberviolence or to provide spaces in which girls’ right to equal participation can be realized.
Even where companies such as Facebook design technology to address issues like nonconsensual distribution, as Emma Ellis (2017) makes clear, other issues remain. For example, these technologies do nothing to mitigate the serious impact that the commercial data-in-exchange-for-service model underlying the Internet can have on privacy, equality, and other cherished liberties and values (O’Neil 2016). Algorithms extract and analyze the data each person sheds in their daily interactions online in order to sort people into categories for marketing purposes (Gillespie 2014). For girls and young women, this may well mean exposure to stereotypical representations of hypersexualized, white, heteronormative femininity against which their self-representations will be compared in order to measure social success. As documented by Jane Bailey and Valerie Steeves (2015), falling short of or exceeding this razor-thin line may, in turn, disproportionately expose girls to attack both online and off.
In order to live up to the responsibilities arising from the degree of control they exert over an increasingly indispensable element of public infrastructure, social media companies should, at minimum, be required to develop transparent and accountable policies and practices aimed at preventing cyberviolence on their platforms while also providing meaningful assistance to those who are victimized (OIPC 2015). Strong privacy settings and anonymous (Shariff 2015), easy-to-use reporting options, paired with clear parameters for swift enforcement and removal of offending content (Estable and Meyer 2015), would better support girls who are targeted (Bailey 2015; Jaffer and Brazeau 2012).
Enhanced legal regulation of the use and retention of data by social media companies may also be necessary, particularly in light of expressed concerns by girls about the long-term reputational impacts that can arise from harmful content (Bailey 2015). Perhaps most importantly, girls must be not only consulted but also engaged in policy making by social media companies and public policy makers alike in order to ensure that any measures adopted reflect girls’ expertise, lived experience, needs, and aspirations (Estable and Meyer 2015).
Finally, it is important to recognize the potential overlap between private responses to cyberviolence by social media companies and public responses by law enforcement agencies. To put it simply, it is essential that we remain absolutely clear that state-imposed limitations and sanctions, such as Criminal Code provisions, are not trumped by clauses in the TOS of social media companies.
I reported being harassed and impersonated on Twitter to the local police. I have contacted the police many times throughout my career about online threats and harassment, and the police are aware of who I am and the threats I receive. What I was experiencing could have amounted to criminal identity fraud or criminal harassment, and it should have been investigated. Local police were not helpful; they simply referred me to the Twitter policy on how to report harassment. They told me that if Twitter allowed for parody accounts, there was nothing they could do about my impersonators. When I pushed back, I was told they probably couldn’t do much but I should file a report anyway for “statistical purposes.” If I’m being brushed off by law enforcement, I can’t help but wonder who is actually being helped.
Law Enforcement Responses
The Criminal Code in Canada includes several offences (such as criminal harassment and nonconsensual distribution of intimate images) that cover a variety of forms of cyberviolence to which girls are subjected (Bailey 2016). Like Shariff (2015), we suggest that long-term change is more likely to be achieved through an emphasis on proactive human rights-based responses aimed at root causes than by reactive and punitive criminal approaches. However, because criminal remedies may be both appropriate and necessary responses in extreme cases, and because girls are taught that they should turn to law enforcement as a way to stop online harassment, it is imperative that law enforcement agents responsibly receive and investigate girls’ reports of cyberviolence.
Currently, criminal sanctions for cybercrimes are too sporadically applied,6 and many young women and girls have lost faith in reporting to law enforcement, in part because of a lack of response by the police (Merlan 2015). As Marie Sinha’s (2013) report makes clear, cyberviolence in general, like other gender-based crimes against girls, is underreported and under-prosecuted (see also Keats Citron 2014). Not believing that law enforcement will take the complaint seriously is cited as one reason why gender-based crimes are underreported, especially by girls (Sinha 2013; UN 2013). Perhaps this also explains why less than 10 percent of all girls’ cyberviolence incidents are reported to police (Mazowita and Vézina 2014).
The initial police response to the report of cyberviolence against Amanda Todd provides a telling example. When the cyberviolence against the teenaged girl was reported, rather than investigating the violence, police shifted the focus to Amanda’s behavior, advising her to close down her social media and e-mail accounts to avoid the abuse (OIPC 2015). Only after her suicide did the police assign a number of officers to investigate her case. Unfortunately, as Danielle Keats Citron (2014) demonstrates, the inadequate police response in Amanda’s case is not unique. Too often, women and girls who report cyberviolence to police have their cases dismissed outright or are faced with police inaction because of an alleged lack of police capacity to investigate (Hess 2014) or because of an alleged absence of criminal offences applicable to the situation (MacKay 2012).
Law enforcement agencies’ deference to the TOS of social media companies to resolve issues, and their unrealistic advice that girls address attacks themselves by simply turning off social media, are both problematic. This kind of police inaction feeds a public perception, identified by Fairbairn (2015), that these are not issues that, in many cases, merit public redress according to law. Further, and in any event, a social media provider’s standard form TOS cannot trump applicable criminal law.
Notwithstanding the existence of criminal laws applicable to many forms of cyberviolence, unsupportive police responses and concerns about the long-term efficacy of criminal law responses suggest that law enforcement officers require further training and that additional resources should be devoted to proactive, human rights-based approaches. These changes, in addition to criminal remedies, will be essential in ending cyberviolence against girls.
In the end, I had to find my own solution to stop the impersonators. They did not stop opening new Twitter accounts after I reported them and got them taken down. I had to reach out to my followers on Twitter and to the media for help. A lawyer who follows me on Twitter suggested that I use my intellectual property rights to go after the impersonators, who had been using copies of photos from my original Twitter page to mimic my account without having the right to use those images. I tweeted that I would go after my impersonators for copyright violation. Simultaneously, I did an interview with Buzzfeed in which I named my experience of being impersonated and the absurd so-called solutions that Twitter proposed. Soon after that, the impersonators finally stopped. I think threatening to sue them for copyright infringement, as well as demonstrating that I had an audience and a public voice, was why they finally gave up. Reporting to Twitter and the police had not effectively slowed them down.
As Julie’s story so aptly illustrates, reporting abuse to social media companies and law enforcement is not contributing toward ending cyberviolence against women and girls as fully as it can and should be. Empowering girls with technological skills and educating them on available reporting options is futile unless the institutions to which girls report respond to their complaints in a way that respects girls’ expressive and equality rights to fully engage online. When law enforcement agencies and social media companies do not have the resources or impetus to respond adequately to girls’ reports of cyberviolence, reporting tools are rendered powerless to resolve these incidents and communicate their seriousness to the public. For this reason, well-designed reporting tools that are properly resourced, with staff committed to resolving complaints in a publicly transparent and accountable way, make up one component of an equality tool kit for creating an environment conducive to the free and equal exercise of girls’ rights to self-determination and autonomy. At a minimum, these institutions should provide girls with publicly available information on their procedures for addressing cyberviolence and provide transparent data on how many cases are reported, which types of reported cyberviolence have violated the law or TOS, and what action was taken to resolve the complaint.
Institutions that benefit from girls’ online presence have a public duty to meaningfully address abuse that works to exclude girls’ participation. Social media companies, as institutions invested in and asserting a significant degree of control over young people’s online interactions (Steeves 2015); public policy makers, whose decisions paved the way for young people’s seamless online/offline existence (Bailey 2016); and law enforcement agents, who are responsible for addressing crime, are all stakeholders with a responsibility to contribute toward the creation of online environments in which girls can be equal participants and content creators. In carrying out these responsibilities, these institutions must actively engage girls in their policy-making processes, recognizing them both as experts in their own lives and aspirations, and as social agents worthy of community respect and support.
Special thanks to the Social Sciences and Humanities Research Council for funding the eQuality Project, a seven-year partnership initiative of which this article forms part.
In this article, we use the term cyberviolence to denote the wide spectrum of violence girls experience online, including but not limited to online harassment and misogyny that results in psychological or emotional harm as well as technologically facilitated physical violence (for example, sexual assaults that are filmed and posted online). It is also important to note that cyberviolence is often rooted in racism, colonialism, homophobia, transphobia, and ableism (Fairbairn and Black 2015).
While many existing and potential responses could be discussed here, given space constraints we will focus on the roles of social media companies and law enforcement. For a more detailed discussion of these and other legal approaches, see Bailey (2016).
Amanda Todd was a teenage girl from British Columbia who died by suicide after being sexually exploited online by a 35-year-old man. Rehtaeh Parsons was a teenage girl from Nova Scotia. Images of Rehtaeh being sexually assaulted were distributed online. Harassment related to the images was linked to her eventual suicide.
Since these incidents, Twitter has changed its policies and has allowed Julie’s account to be officially verified. However, this service is available only to “accounts of public interest” and will not be available to most girls.
Because, in all Canadian provinces and territories, those under 18 cannot be legally bound by the contracts they sign (subject to certain exceptions), the legal status of girls “agreeing” to TOS is in and of itself an interesting question, meriting further research and examination. See, for example, Infants Act, RSBC 1996, c. 223, s. 19 for children in British Columbia. In all other provinces and territories, common law holds that minors lack the capacity to contract, subject to certain exceptions.
Athar, Rima. 2015. From Impunity to Justice: Improving Corporate Policies to End Technology-Related Violence Against Women. Report to the Association for Progressive Communications.
Bailey, Jane. 2014. “Time to Unpack the Juggernaut? Reflections on the Canadian Federal Parliamentary Debates on ‘Cyberbullying.’” Dalhousie Law Journal 37 (2): 661–707.
Bailey, Jane. 2015. “A Perfect Storm: How the Online Environment, Social Norms, and Law Shape Girls’ Lives.” In Bailey and Steeves 2015: 21–54.
Bailey, Jane. 2016. “Canadian Legal Approaches to ‘Cyberbullying’ and Cyberviolence: An Overview.” Ottawa Faculty of Law Working Paper No. 2016–37. doi:10.2139/ssrn.2841413.
boyd, danah, and Eszter Hargittai. 2010. “Facebook Privacy Settings: Who Cares?” First Monday 15 (8). doi:10.5210/fm.v15i8.3086.
Busch, Thorsten, and Tamara Shepherd. 2014. “Doing Well by Doing Good? Normative Tensions Underlying Twitter’s Corporate Social Responsibility Ethos.” Convergence 20 (3): 293–315.
Daniels, Jessie. 2008. “Race, Civil Rights, and Hate Speech in the Digital Era.” In Learning Race and Ethnicity: Youth and Digital Media, ed. Anna Everett, 129–154. Cambridge, MA: MIT Press.
Daniels, Jessie. 2009. “Rethinking Cyberfeminism(s): Race, Gender, and Embodiment.” Women’s Studies Quarterly 37 (1–2): 101–124.
Dewey, Caitlin. 2015. “Twitter CEO Dick Costolo Finally Admits the Obvious: Site Has Failed Users on Abuse.” Washington Post, 5 February.
Duggan, Maeve. 2014. “Online Harassment.” Pew Research Center, 22 October. http://www.pewinternet.org/2014/10/22/online-harassment.
Ellis, Emma Grey. 2017. “Facebook’s New Plan May Curb Revenge Porn, but Won’t Kill It.” Wired, 6 April. https://www.wired.com/2017/04/facebook-revenge-porn.
Estable, Alma, and Mechthild Meyer. 2015. Creating a Safer Digital World for Young Women. Toronto: YWCA Canada. http://ywcacanada.ca/data/documents/00000460.pdf.
Fairbairn, Jordan, and Dillon Black. 2015. Cyberviolence against Women and Girls. Ottawa: Ottawa Coalition to End Violence against Women. http://www.octevaw-cocvff.ca/sites/default/files/CyberViolenceReport_OCTEVAW.pdf.
Fairbairn, Jordan, Rena Bivens, and Myrna Dawson. 2013. Sexual Violence and Social Media: Building a Framework for Prevention. Ottawa: Ottawa Coalition to End Violence against Women. http://www.violenceresearch.ca/sites/default/files/FAIRBAIRN2.pdf.
Fairbairn, Jordan. 2015. “Rape Threats and Revenge Porn: Defining Sexual Violence in the Digital Age.” In Bailey and Steeves 2015: 229–252.
Gandy, Imani. 2014. “#TwitterFail: Twitter’s Refusal to Handle Online Stalkers, Abusers, and Haters.” Rewire, 12 August. https://rewire.news/article/2014/08/12/twitterfail-twitters-refusal-handle-online-stalkers-abusers-haters.
Garossino, Sandy, and Raffi Cavoukian. 2014. “Did Police Miss Chance to Protect Amanda Todd From Blackmailer?” Huffington Post, 27 April.
Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality and Society, ed. Tarleton Gillespie, Pablo Boczkowski, and Kristen Foot, 167–194. Cambridge, MA: MIT Press.
Heath, Sarah. 2015. “Security and Insecurity Online: Perspectives from Girls and Young Women.” In Bailey and Steeves 2015: 361–384.
Hess, Amanda. 2014. “Why Women Aren’t Welcome on the Internet.” Pacific Standard, 6 January. https://psmag.com/why-women-aren-t-welcome-on-the-internet-aa21fdbc8d6.
Hodkinson, Paul, and Sian Lincoln. 2008. “Online Journals as Virtual Bedrooms? Young People, Identity and Personal Space.” Young 16 (1): 27–46.
Jaffer, Mobina S. B., and Patrick Brazeau. 2012. Cyberbullying Hurts: Respect for Rights in the Digital Age. Report to the Senate Standing Committee on Human Rights, Ottawa.
Kidd, Dustin, and Amanda Turner. 2016. “#GamerGate: Misogyny and the Media.” In Defining Identity and the Changing Scope of Culture in the Digital Age, ed. Alison Novak and Imaani J. El-Burki, 117–139. Hershey, PA: IGI Global.
Lippman, Julia, and Scott Campbell. 2014. “Damned if You Do, Damned if You Don’t … if You’re a Girl: Relational and Normative Contexts of Adolescent Sexting in the United States.” Journal of Children and Media 8 (4): 371–386.
MacKay, A Wayne. 2012. Respectful and Responsible Relationships: There’s No App for That. Report to the Nova Scotia Task Force on Bullying and Cyberbullying.
Madden, Mary, Amanda Lenhart, Sandra Cortesi, Urs Gasser, Maeve Duggan, Aaron Smith, and Meredith Beaton. 2013. “Teens, Social Media, and Privacy.” Pew Research Center, 21 May. http://www.pewinternet.org/2013/05/21/teens-social-media-and-privacy.
Mascheroni, Giovanna, Jane Vincent, and Estefania Jimenez. 2015. “‘Girls Are Addicted to Likes So They Post Semi-naked Selfies’: Peer Mediation, Normativity and the Construction of Identity Online.” Cyberpsychology: Journal of Psychological Research on Cyber Space 9 (1): 30–43.
Merlan, Anna. 2015. “The Cops Don’t Care about Violent Online Threats: What Do We do Now?” Jezebel, 29 January. http://jezebel.com/the-cops-dont-care-about-violent-online-threats-what-d-1682577343.
Milford, Trevor Scott. 2015. “Revisiting Cyberfeminism: Theory as a Tool for Understanding Young Women’s Experiences.” In Bailey and Steeves 2015: 55–82.
Navarro, Jordana, and Jana Jasinski. 2013. “Why Girls? Using Routine Activities Theory to Predict Cyberbullying Experiences Between Girls and Boys.” Women and Criminal Justice 23 (4): 286–303.
OIPC (Office of the Information and Privacy Commissioner for British Columbia). 2015. Cyberbullying: Empowering Children and Youth to be Safe Online and Responsible Digital Citizens. Report to the OIPC and Representative for Children and Youth.
Perrin, Andrew. 2015. “Social Media Usage: 2005–2015.” Pew Research Center, 8 October. http://www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015.
Regan, Priscilla M., and Diana L. Sweet. 2015. “Girls and Online Drama: Aggression, Surveillance, or Entertainment.” In Bailey and Steeves 2015: 175–197.
Sargent, Michelle A. 2013. “Misplaced Misrepresentations: Why Misrepresentation-of-Age Statutes Must Be Reinterpreted as They Apply to Children’s Online Contracts.” Michigan Law Review 112 (2): 301–330.
Senft, Theresa, and Nancy Baym. 2015. “What Does the Selfie Say? Investigating a Global Phenomenon.” International Journal of Communication 9: 1588–1606.
Shariff, Shaheen. 2015. Sexting and Cyberbullying: Defining the Line for Digitally Empowered Kids. New York: Cambridge University Press.
Shariff, Shaheen, and Ashley DeMartini. 2015. “Defining the Legal Lines: eGirls and Intimate Images.” In Bailey and Steeves 2015: 282–305.
Steeves, Valerie. 2015. “‘Pretty and Just a Little Bit Sexy, I Guess’: Publicity, Privacy, and the Pressure to Perform ‘Appropriate’ Femininity on Social Media.” In Bailey and Steeves 2015: 153–174.
UN (United Nations). 2013. Access to Justice for Children: Report of the United Nations High Commissioner for Human Rights. UNGA, 25th Sess, Agenda Items 2 and 3. UN Doc A/HRC/25/35.