“Tweets span international boundaries, making for jurisdictional issues with respect to the adjudication of legal claims relating to them.” This is the opening line of a lengthy decision of a British Columbia court in Giustra v Twitter, Inc., holding that the court had territorial competence over Twitter to adjudicate claims arising from defamatory tweets disseminated by users and relayed on Twitter’s social media platform.
The case is significant. First, because of Justice Myers’ holding that the B.C. court had jurisdiction over Twitter to rule on its liability. Second, because of his opinion that the court could rule on Twitter’s liability, if any, in Canada and in the U.S. without violating the international comity principle. Third, because, should the case proceed to trial, it would raise, for the first time in Canada, the issue of whether a social media platform like Twitter can be liable for content first published by users. This last question is the most significant, especially in the current context in which the content moderation role of social platforms is being hotly debated, along with new regulatory frameworks to address illegal content on the Internet.
The plaintiff, Mr. Giustra, is a well-known British Columbia businessman and philanthropist. He has a substantial profile and reputation in both Canada and the United States. Commencing in 2015, Mr. Giustra was targeted by a group who vilified him for political purposes. The targeted attack was part of an orchestrated campaign to discredit him, in part, because of his charitable and philanthropic work in support of the Clinton Foundation. Unidentified Twitter users posted numerous tweets alleging his involvement in a supposed conspiracy known as “pizzagate” and accusing him of being in a pedophile ring. Many of the tweets had a Canadian context. The tweets contained hashtags such as #Qanon, #pedogate and #LolitaExpress.
The claim alleges that each tweet was accessed, downloaded and read by many people in B.C. and damaged Mr. Giustra’s professional and business reputation. The evidence showed that he also has a substantial reputation in the U.S., where the tweets were read by people in that country.
Personal jurisdiction over Twitter
The first issue determined by the court was whether it had territorial competence over the claim against Twitter. The legal framework to determine territorial competence (jurisdiction simpliciter or personal jurisdiction) is set out in the B.C. Court Jurisdiction and Proceedings Transfer Act (CJPTA). It is based on the Uniform Law Conference of Canada’s model legislation and was intended to codify the common law.
One basis for asserting personal jurisdiction is that “there is a real and substantial connection between British Columbia and the facts on which the proceeding against that person is based”. Under the CJPTA a presumption of such connection exists if the proceeding “concerns a tort committed in British Columbia”. Defamation is a tort that is committed in the jurisdiction in which the untrue statement is read or downloaded. While the presumption of a real and substantial connection can be rebutted, the presumption is strong and is difficult to overcome.
Mr. Giustra’s unchallenged allegation that the defamatory statements were read by persons in B.C. was sufficient to establish the presumption of jurisdiction simpliciter.
The Court also held that Twitter did not rebut the presumption. Mr. Giustra has a significant reputation and strong ties to B.C. It appeared there were at least 500,000 Twitter users in the province. Despite the lack of any finding or even evidence of the number of persons in the province (if any) who had read or accessed the defamatory tweets, the court held that “Giustra has gone far enough in demonstrating damage to his reputation here” sufficient to maintain the presumption of jurisdiction.
Twitter argued that as a platform which posts hundreds of millions of messages a day it could not be expected to defend actions in any jurisdiction in which an allegedly defamed person has a reputation and in which the offending tweet has been accessed.
The court rejected this argument saying it was “a somewhat circular argument, because it depends in part on the substantive law that might be applied, namely whether an un-mediated platform such as Twitter is legally responsible for content posted or tweeted by others”, a point the court stated “is not a settled point in Canadian law” and “not something that should be determined in a jurisdictional challenge”. The court also noted that Twitter had been given notice of the claim, thus suggesting that this was a relevant reason for rejecting Twitter’s argument.
In reaching the conclusion it did, the court did not appear to consider the fundamental argument made by Twitter. In the leading case on territorial competence in Internet defamation, Haaretz.com v. Goldhar, the Supreme Court of Canada explained that the presumptive connecting factors seek to bring “predictability and consistency” to the personal jurisdiction analysis. However, if a defendant shows that it would “not be reasonable to expect that the defendant would be called to answer proceedings in that jurisdiction”, then any presumption of jurisdiction will be rebutted.
When discussing how the presumption of jurisdiction could be rebutted, the Supreme Court focused on the extent of the plaintiff’s reputation in the jurisdiction, but made clear that it was “not appropriate to propose an exhaustive list of factors that can rebut the presumption of jurisdiction”. It might well be that the group of Twitter users who vilified Mr. Giustra for political purposes should have expected to answer for their defamatory tweets in a B.C. court, but it by no means follows that a social media platform with over 145 million users who send and share hundreds of millions of tweets around the world on a broad array of topics would have that expectation for individual tweets published by users.
The court next turned its attention to whether it should decline to hear the case.
Under the CJPTA and at common law under the law of forum non conveniens, even if a court finds it has territorial competence, it may decline to take jurisdiction in the proceeding on the ground that a court of another state is a more appropriate forum. In this analysis, the court must consider a number of non-exhaustive factors. After considering these factors, the court held that California was not shown to be a more appropriate forum to hear the case. In coming to this conclusion the court made a number of important findings.
First, where there is territorial competence in B.C. over a defendant to enable it to bring claims for defamatory materials published in B.C., the court also has jurisdiction to adjudicate claims against the same defendant for defamatory content published in other countries. However, because the tort of defamation occurs where the defamatory materials are read or accessed, the law of the other territory would apply to such claims. Applied to this case, since there was jurisdiction simpliciter given the B.C. tweets, Mr. Giustra could also assert claims for defamation for tweets read in the U.S., but under the U.S. law applicable to those tweets. Justice Myers observed, however, that “there may be some difficulty sorting out whether specific tweets were published in only one country and respective damage that flows from the locations of the tweets”.
Second, the court found that Twitter would have no liability under U.S. law for defamatory tweets disseminated by its users under the U.S. SPEECH Act and Section 230 of the Communications Decency Act (CDA). On this basis, the court concluded that “California cannot be an alternative forum at all much less the clearly more appropriate forum when the plaintiff would have no cause of action there for tweets published in British Columbia and harm suffered in B.C. to which B.C. law would apply under our conflict rules”. The court’s conclusion on this point raises questions. Juridical advantages to the defendant of litigating in another court are relevant to forum non conveniens. Twitter would undoubtedly enjoy a juridical advantage—civil immunity—if sued in California. But the court treated Twitter’s advantage as irrelevant, or even as a factor weighing against Twitter, simply because the advantage would be complete.
Third, the court concluded there would be no breach of comity for it to assume jurisdiction over Twitter. “It would not be ignoring California law, nor would it be making any evaluation of that law, as Twitter alleges. The court would be applying, in part, U.S. or California law. If Twitter is correct, Giustra will not be able to recover damages for tweets published in the U.S.”
In the result, the court concluded that it had jurisdiction over the claim and should not decline it.
In an important final paragraph in the reasons, the court recognised that in deciding if there was personal jurisdiction over Twitter it made other findings that arguably were not necessary for deciding that question. The court accordingly ended its reasons for decision with an important caveat.
A final comment. A jurisdictional challenge, like most interlocutory motions, is not the place to finally resolve complex issues of law. This is particularly so in this case where Haaretz left several issues unresolved and the court was split on several issues. While I conclude that this court has jurisdiction over this case and do not simply leave it to be determined by the trial judge, that does not mean that he or she is bound by the conclusions of law that I have made.
The liability of Twitter on the merits
Justice Myers made it clear that the judgment addressed only the jurisdiction to hear the claim and not whether Twitter had any ultimate liability to Mr. Giustra. This issue could, ultimately, be the most important one.
There have been many cases addressing liability for Internet defamation in Canada, but none has thoroughly canvassed the liability of a social media platform for content made available by users. Courts in other Commonwealth countries have examined the liability of hosting providers and search engines for third-party content, but none of those authorities has been applied in Canada in an analogous situation. Further, none of the authorities has considered the principles underlying defamation in a coherent way that takes into account the evolving analytical frameworks that have been developing in parallel in other jurisprudence of the Supreme Court, regulatory developments domestically and internationally, or Canada’s treaty obligations under the CUSMA. These issues, which could influence the outcome of the case on the merits, are highlighted below.
There is a plethora of cases from other Commonwealth countries addressing the liability of various types of service providers for disseminating the defamatory content of users. For example:
- In the English case Godfrey v. Demon Internet Ltd. an ISP that hosted a newsgroup did not take down an allegedly defamatory posting after a notice was received. The defense of innocent dissemination was rejected. The court held that Demon could not be assimilated to a telephone company whose role in the transmission of messages is merely passive because it had chosen to receive the USENET newsgroup, to store it and to make it available to customers.
- In Tamiz v. Google Inc. Google, as the host of the blogger.com website, was sued for defamation for failing to take down or disable access to defamatory content once it had received notice that it was hosting such content. The English Court of Appeal expressed the view that Google could have been liable as a publisher of such material by failing to take it down once it had notice of the presence of the alleged defamatory material on its systems.
- In Bunt v. Tilley, in considering the potential liability of an ISP, an English court held that in order to hold someone liable as a publisher, it is not enough that a person merely plays a passive instrumental role in the process; there must be knowing involvement in the process of publication of the relevant words.
- In Galloway v. Frazer & Ors a Northern Ireland court ruled that the hosting site YouTube could be liable for not removing defamatory videos containing personal information posted on its site after receiving notice.
- In Metropolitan International Schools Ltd. v. Designtechnica Corp., an English court ruled that search engines are not responsible for publishing defamatory statements by reason only of providing, in search results, links to a website containing defamatory material. However, the court suggested that a search engine could be liable if it receives notice and takes no steps to disable access after receiving the notice.
- In Duffy v. Google, an Australian court found that Google was liable as a secondary publisher of defamatory information in search results once it received notice of the defamation claim. The court also held that Google could be liable as a republisher of defamatory content when users clicked on hyperlinks in search results and by defamatory content generated using its Autocomplete and Related Search Terms features. The findings of the trial judge were affirmed by the Australian Full Court.
- In Trkulja v. Google Inc. & Anor. an Australian court found that Google could be liable as a publisher by enabling its search engine to assemble content in its databases in an automated manner that presented search results that were defamatory. The decision was ultimately affirmed by the High Court of Australia.
Canadian authorities and developments
There is no shortage of Canadian jurisprudence finding that persons that publish defamatory content online can be liable as (primary) publishers of the content. There are also cases in which persons that operate internet forums, websites, message boards, and social media pages can become liable for the content posted by third parties once they become aware of it. For example:
- In Weaver v. Corcoran a British Columbia court held that the operator of an internet forum, a reader comment area on a newspaper website, could be liable for publication of the defamatory postings on the forum once it becomes aware of such postings and takes no steps to remove them.
- In Baglow v. Smith an Ontario court held that message board operators could be liable for publishing third-party defamatory posts.
- In Kent v. Martin, an Alberta court held that a media site that publishes defamatory content by linking to an article in its database can be liable for publishing the defamatory material at the link. The liability was premised on both the publication of the materials and linking to it.
- A British Columbia court in Pritchard v. Van Nes held that a person can be liable for postings made by third parties on social network pages it controls where the person has actual knowledge of the defamatory material posted by the third party, there is a deliberate act (that can include inaction in the face of actual knowledge), and there is power and control over the defamatory content.
- In Carter v. B.C. Federation of Foster Parents Assn., the plaintiff alleged that by mentioning an Internet address of an online discussion forum, the publisher of a newsletter was responsible for republishing defamatory comments published on that site. The B.C. Court of Appeal held that there was no publication.
- In Niemela v. Malamas, a B.C. court expressed the opinion that a search engine is not liable for publishing snippets prior to receiving notice of the defamation claim. The court held that the case against Google could not succeed, characterizing Google as “a passive instrument and not a publisher of snippets”. However, the decision expressly left open whether Google could be a publisher of snippets and search results after receiving notice of the defamatory content.
The most important Canadian authority on the issue is likely to be the Supreme Court’s leading case on Internet defamation, Crookes v. Newton. (I wrote a detailed summary of the case in the blog post, Hyperlinking and ISP liability clarified by Supreme Court in Crookes case.) In this case, the Court ruled that a pure hyperlink should not be regarded as a “publication” for defamation purposes. Although the case dealt mainly with that issue the court gave expansive reasons which also canvassed the importance of freedom of expression on the Internet and the liability of Internet intermediaries for the dissemination of defamatory content.
Members of the Supreme Court endorsed the Court’s jurisprudence that passive, instrumental acts by intermediaries should not be considered, depending on the context, acts of publication, communication, or authorization. The reasons suggest that purely passive and content-neutral Internet intermediaries will not be liable for publishing defamatory content merely because their systems are used to facilitate its publication. However, the reasons also suggest that once Internet intermediaries cease to act in content-neutral ways, acquire actual or constructive knowledge that their systems are being used to disseminate defamatory content, or are negligent in failing to find out, they may be liable for publishing it unless they take steps to prevent the publication.
To be sure, the Crookes case did not canvass the liability of a social media platform that has millions of users who publish hundreds of millions of tweets a day. Nor has any Canadian case canvassed how the existing analytical frameworks for determining service provider liability for Internet defamation would apply, or be adapted by the courts, where a social media platform would be required to engage in content moderation after receiving notice to avoid becoming liable for the defamatory content. A recent law reform report by the Law Commission of Ontario, Defamation Law in the Internet Age, for example, recommended that defamation law be changed to provide individuals with more robust remedies against Internet defamation, that a notice-and-takedown regime be established, and that Internet intermediaries be given protection against the strict liability principles associated with the publication and republication rules under the common law of defamation.
The policy issues related to content moderation are currently the subject of considerable debate. For example, many commentators applauded the recent decisions of social media platforms to ban former U.S. President Donald Trump and the decisions of hosting providers to de-platform the right-wing social media app Parler. However, even in these extreme examples of content and platform moderation, some commentators expressed concern over whether such decisions should be left to the platforms or instead be made by a democratic institution or an independent organization. There are significant debates about how human rights and freedom of expression values should be balanced against addressing online harms through content moderation practices or other processes. These issues are central to recent proposals to update regulatory frameworks to address serious online harms in the U.K., the European Union (with its draft Digital Services Act), Germany, Australia, and Canada. There has also been considerable debate in the U.S. about whether Section 230 of the CDA should be scrapped or clarified. Recently, in seeking comments on a proposal to have ISPs block botnets, the CRTC proposed that decisions about which sites to block should be made by an independent entity.
Canada, the U.S. and Mexico also recently ratified the CUSMA. This treaty contains commitments by the parties to provide Interactive Computer Services providers with some measure of immunity for content disseminated by their users. Article 19.17 provides, in part:
1 The Parties recognize the importance of the promotion of interactive computer services, including for small and medium-sized enterprises, as vital to the growth of digital trade.
2 To that end, other than as provided in paragraph 4, no Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability for harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part, created, or developed the information.
There is a footnote to Art. 19.17 that provides that “[F]or greater certainty, a Party may comply with this Article through its laws, regulations, or application of existing legal doctrines as applied through judicial decisions.”
The wording of the immunity in the CUSMA does not directly map onto the wording of Section 230 of the CDA. However, the treaty may raise questions about whether Canada’s commitments under the CUSMA have any bearing on the liability of Twitter in a case like this.
The Supreme Court has, in numerous cases including most recently in Nevsun Resources Ltd. v. Araya, confirmed the importance of the development of the common law “to keep the law aligned with the evolution of society”. There is also a presumption that Canada’s laws are intended to comply with our treaty obligations. These recent developments and considerations cannot be ignored in considering how the Supreme Court might rule in a case like this.
 Giustra v. Twitter, Inc., 2021 BCSC 54.
 Haaretz.com v. Goldhar, 2018 SCC 28.
 According to the court:
The majority in Haaretz held that the law applicable to defamation cases is the law where the tort occurred (the lex loci delicti rule). In internet defamation cases the tort occurs where the words are read…
In Haaretz, Côté J. recognised that the lex loci delicti rule in internet defamation cases might result in situations where multiple courts may assume jurisdiction and apply their own laws:
I recognize that in Internet defamation actions, where a tort may have occurred in multiple jurisdictions, the lex loci delicti rule may allow courts in multiple forums to assume jurisdiction and apply their own law. In an interconnected world where international players with global reputations are defamed through global publications, this is unsurprising…
A scenario Haaretz did not deal with was the possibility that a court could try a case applying several jurisdictions’ laws to it. Nevertheless, that is a concomitant of the lex loci delicti rule. In the present case, the applicable law to tweets published in British Columbia would be B.C. law. The law applicable to tweets published in the United States would, under the lex loci delicti rule, be the governing law there. To state the same proposition in the negative, on the assumption that Twitter Inc. has no physical or business presence in Canada, I do not see in this case how B.C. law could be applied to tweets relayed by Twitter and published in the United States. That would be an extra-territorial and impermissible application of B.C. law.
If the application of more than one jurisdiction’s laws were not the approach, the result would be that a plaintiff would have to bring multiple actions against the same defendant in different jurisdictions (something the law of forum conveniens strives to avoid), or a plaintiff would have to choose one jurisdiction only with its applicable law and forego claims in the others. Neither of those is a satisfactory solution. To put the matter another way, it cannot be the case that a defendant in a defamation action involving publication in multiple countries can successfully assert that the action be tried in one jurisdiction only (to avoid multiplicity of litigation), with the trial being limited to the publication in that jurisdiction only. That would be allowing the defendant to have it both ways…
All of this comes down to an answer to a major point raised by Twitter: that it would be a breach of comity to apply Canadian law to Twitter’s conduct in the U.S. Under the lex loci delicti rule, that activity would be governed by U.S. law. If Mr. Giustra can show that he suffered damage in the U.S. because of publication in Canada, that would be subject to Canadian law. That also would not be a breach of comity.
 In addition, the court stated:
Twitter argues that there is a preferable forum which has a cause of action for defamation; it is just that Mr. Giustra will lose because U.S. law does not recognise any liability of Twitter for this type of defamation claim. I think that is overly simplistic. It is somewhat analogous to what Brown J. observed in Uber Technologies Inc. v. Heller, 2020 SCC 16, a case dealing with the enforceability of an arbitration clause:
. . . there is no good reason to distinguish between a clause that expressly blocks access to a legally determined resolution and one that has the ultimate effect of doing so. [Emphasis in original.]
As Brown J. also noted at para. 115, public policy requires access to justice and that is not merely access to a resolution. These comments are not inapt to the forum conveniens issues here.
 Godfrey v. Demon Internet Ltd., [1999] E.W.J. No. 1226 (Q.B.D.).
 Tamiz v. Google Inc., [2013] EWCA Civ 68 (14 February 2013).
 Bunt v. Tilley, [2006] EWHC 407 (Q.B.).
 Galloway v. Frazer & Ors, [2016] NIQB 7 (27 January 2016).
 Metropolitan International Schools Ltd. v. Designtechnica Corp., [2009] EWHC 1765 (Q.B.) (16 July 2009).
 Duffy v. Google Inc., [2015] SASC 170 (27 October 2015).
 Google Inc. v. Duffy, [2017] SASCFC 130 (4 October 2017). For a case comment on the Duffy case see Barry Sookman, Google liable for defamation through search and autocomplete features: Duffy v Google.
 Trkulja v. Google Inc. (No 5), [2012] VSC 533 (12 November 2012).
 Trkulja v. Google LLC, [2018] HCA 25 (13 June 2018). For a case comment on the Trkulja case see Barry Sookman, Google’s loss in online defamation case Trkulja v Google.
 The Canadian and international defamation cases are explained and analyzed in more detail in Chapter 11 (Jurisdiction and the Internet) of my book, Sookman: Computer, Internet, and Electronic Commerce Law (Thomson Reuters, 2000–2020).
 Weaver v. Corcoran, 2015 BCSC 165.
 Baglow v. Smith, 2015 ONSC 1175.
 Kent v. Martin, 2016 ABQB 314.
 Carter v. B.C. Federation of Foster Parents Assn., 2005 BCCA 398.
 Niemela v. Malamas, 2015 BCSC 1024.
 Crookes v. Newton, 2011 SCC 47.
 See, Society of Composers, Authors and Music Publishers of Canada v. Canadian Assn. of Internet Providers, 2004 SCC 45; also, Reference re Broadcasting Act, 2012 SCC 4; Reference re Broadcasting Regulatory Policy CRTC 2010-167 and Broadcasting Order CRTC 2010-168, 2012 SCC 68.
 See, Twitter Inc., “Permanent suspension of @realDonaldTrump”, 8 January 2021; Nick Clegg, “Referring Former President Trump’s Suspension From Facebook to the Oversight Board”, Facebook, January 21, 2021; Blayne Haggart, “Platform Regulation Is Too Important to Be Left to Americans Alone”, CIGI Online, January 18, 2021; Jameel Jaffer Shannon et al, “Does Deplatforming Trump Set a New Precedent for Content Moderation?”, CIGI Online, January 18, 2021; Laura Kayali et al, “Brussels eclipsed as EU countries roll out their own tech rules”, Politico, January 18, 2021; David Kaye et al, “Right Way to Regulate Digital Harms”, December 21, 2020; Bruna Martins dos Santos, “Four lessons for U.S. legislators from the EU Digital Services Act”, Brookings, January 6, 2021; “Judge refuses to reinstate conservative social media site Parler after Amazon shutdown”, January 22, 2021.
 CRTC, Compliance and Enforcement and Telecom Notice of Consultation CRTC 2021-9, Call for comments – Development of a network-level blocking framework to limit botnet traffic and strengthen Canadians’ online safety, 13 January 2021.
 Nevsun Resources Ltd. v. Araya, 2020 SCC 5.
 See most recently, Quebec (Attorney General) v. 9147-0732 Québec inc., 2020 SCC 32.