Search engines' liability for defamation – Trkulja v Google

November 28th, 2012 by Barry Sookman

Is a search engine liable for publishing defamatory materials that are assembled for the first time in an automated manner by its programmed computers? In the recent Australian case Trkulja v Google Inc LLC & Anor (No 5) [2012] VSC 533 (12 November 2012), a jury found Google liable. The trial judge upheld the jury's verdict, holding that search engines are publishers for the purposes of defamation law when their computers produce and put together search results in accordance with their intended operation.

The plaintiff, Michael Trkulja, claimed that a Google search using his name returned search results displaying his picture and name together with pictures of other individuals who were either known to have committed serious criminal offences or against whom serious criminal allegations had been made, along with an article headed "Melbourne crime". He claimed that the search results were defamatory because they created the false imputation that he "was so involved with crime in Melbourne that his rivals had hired a hit man to murder him". The jury agreed and he was awarded (Aus) $200,000 in damages. This was in addition to the (Aus) $225,000 in damages that Mr Trkulja recovered against Yahoo in the Trkulja v Yahoo! Inc [2012] VSC 88 case arising from the same circumstances.

Google applied for judgment notwithstanding the jury's verdict. It argued that, as a matter of law, it was not a publisher of any defamatory materials because it had not "consented to, or approved of, or adopted, or promoted, or in some way ratified, the communication of the material complained of". It relied on the UK cases Metropolitan Schools Ltd v Designtechnica Corp'n [2011] 1 WLR 1743, Tamiz v Google Inc [2012] EWHC 449, and Bunt v Tilley [2006] EWHC 407, and on the decision of the Supreme Court of Canada in Crookes v Newton [2011] 3 SCR 269, for these assertions.

The trial judge ruled that, on the facts of the case, Google was a publisher. It had put in place a fully automated search service that combined materials in a way that created the false innuendo that the plaintiff was involved with crime in Melbourne. Unlike other cases, no person had previously published this defamatory material. Having programmed its computers to produce an intended result (even though the specific result was not, in the circumstances, foreseeable), Google became the publisher of the defamatory materials. According to the court:

The plaintiff accepted (correctly in my view) that he had to establish that Google Inc intended to publish the material complained of. While much was made by counsel for Google Inc of the fact that there was no human intervention between the request made to the search engine and the publication of search results, and of the fact that the system was “fully automated”, the plaintiff’s point was that Google Inc intended to publish everything Google’s automated systems (which systems its employees created and allowed to operate) produced. Specifically, the plaintiff contended that Google Inc intended to publish the material complained of because while the systems were automated, those systems were the consequence of computer programs, written by human beings, which programs were doing exactly what Google Inc and its employees intended and required. On this basis, it was contended that each time the material complained of was downloaded and comprehended, there was a publication by Google Inc (the operator and owner of the relevant search engines), as intended by it. So it was submitted by the plaintiff that Google Inc was a publisher throughout the period in respect of which complaint was made…

The question of whether or not Google Inc was a publisher is a matter of mixed fact and law. In my view, it was open to the jury to find the facts in this proceeding in such a way as to entitle the jury to conclude that Google Inc was a publisher even before it had any notice from anybody acting on behalf of the plaintiff. The jury were entitled to conclude that Google Inc intended to publish the material that its automated systems produced, because that was what they were designed to do upon a search request being typed into one of Google Inc’s search products. In that sense, Google Inc is like the newsagent that sells a newspaper containing a defamatory article. While there might be no specific intention to publish defamatory material, there is a relevant intention by the newsagent to publish the newspaper for the purposes of the law of defamation.

By parity of reasoning, those who operate libraries have sometimes been held to be publishers for the purposes of defamation law. That said, newsagents, librarians and the like usually avoid liability for defamation because of their ability to avail themselves of the defence of innocent dissemination (a defence which Google Inc was able to avail itself of for publications of the images matter prior to 11 October 2009, and all of the publications of the web matter that were the subject of this proceeding).

As was pointed out by counsel for the plaintiff in his address to the jury, the first page of the images matter (containing the photographs I have referred to and each named “Michael Trkulja” and each with a caption “melbournecrime”) was a page not published by any person other than Google Inc. It was a page of Google Inc’s creation – put together as a result of the Google Inc search engine working as it was intended to work by those who wrote the relevant computer programs. It was a cut and paste creation (if somewhat more sophisticated than one involving cutting word or phrases from a newspaper and gluing them onto a piece of paper). If Google Inc’s submission was to be accepted then, while this page might on one view be the natural and probable consequence of the material published on the source page from which it is derived, there would be no actual original publisher of this page…

To say as a general principle that if an entity’s role is a passive one then it cannot be a publisher, would cut across principles which have formed the basis for liability in the newsagent/library type cases and also in those cases where someone with power to remove a defamatory publication chooses not to do so in circumstances where an inference of consent can be drawn…

Further, while on the facts in Bunt, the defendants were correctly described as “internet intermediaries” (whatever may be the legal consequences of such a description), it is, with respect, doubtful that that same description can be applied to an internet search engine provider in respect of material produced as a result of the operation of that search engine. That said, any such “internet intermediary” is, in any event, performing more than the “merely passive role … [of] facilitating postings” (Cf Bunt).

It follows that, in my view, it was open to the jury to conclude that Google Inc was a publisher – even if it did not have notice of the content of the material about which complaint was made.

Google also argued that even if it had notice that its search results were returning defamatory materials and did nothing to prevent it, it still was not liable as a publisher. It contended that, even with notice, no proper inference that it adopted or accepted responsibility for the material complained of could ever be drawn from its conduct in operating a search engine. The trial judge also rejected this submission.

The question is whether, after relevant notice, the failure of an entity with the power to stop publication and which fails to stop publication after a reasonable time, is capable of leading to an inference that that entity consents to the publication. Such an inference is clearly capable of being drawn in the right circumstances (including the circumstances of this case). Further, if that inference is drawn then the trier of fact is entitled (but not bound) to conclude that the relevant entity is a publisher. Google Inc’s submission on this issue must be rejected for a number of reasons, the least of which is that it understates the ways in which a person may be held liable as a publisher.

Google also claimed it was entitled to judgment on the basis that it had established the defense of innocent dissemination under Section 32(1) of the Australian Defamation Act. That Act provides a defense to the publication of defamatory matter if the defendant proves that (a) the defendant published the matter merely in the capacity, or as an employee or agent, of a subordinate distributor; (b) the defendant neither knew, nor ought reasonably to have known, that the matter was defamatory; and (c) the defendant's lack of knowledge was not due to any negligence on the part of the defendant. On the facts, Google had not made out the defense because the "jury may well have concluded that Google Inc failed to establish that it ought not have reasonably known that the relevant matter was defamatory and/or that it had not established that any lack of knowledge on its part was not due to its negligence." The jury concluded, however, that Google was an innocent disseminator prior to receiving notice of the claim.

Comments

The Trkulja v Google case is one in a series of recent cases that have canvassed the extent to which the law holds a person responsible for the legal consequences of acts carried out by computers programmed by or for that person for an intended purpose. The case raises important questions. Should a search engine, or other programmed service, be totally exonerated from all damages suffered by the public as a result of its use, as Google claimed in this case? Should a search engine be liable only where it has some level of knowledge that its actions are causing or are likely to cause damage and does not take steps to avoid it? (Google also sought to avoid liability in the case for knowingly making allegedly defamatory search results available to the public after getting notice.) Or should a search engine that does not merely republish other materials, but actually creates the combination of associations by "cutting and pasting" from data in its databases, be liable like any other publisher of defamatory material? Or is there a need for a special rule to fairly address the interests of all stakeholders? The issues, as Ashley Hurst recently pointed out in the article Internet Libel, Part 1: What makes it Different?, are complex.

As a matter of general principle, a person who automates a process using a computer to carry out an intended purpose is responsible in law for the acts carried out by his or her tool. Liability is not avoided by automating the actions in question. This principle was recently affirmed in the decision of the BC Supreme Court in Century 21 Canada Limited Partnership v. Rogers Communications Inc., 2011 BCSC 1196. In that case the defendant Zoocasa Inc. sought to avoid being bound by web site terms it knew about (terms that deemed use of the web site to be agreement to them) on the basis that it had merely used a search engine to crawl the site. The court found that Zoocasa had agreed to the site terms when it employed a search engine to crawl and copy portions of Century 21's web site. According to the court:

Does the fact that Internet search engines access sites such as those of Century 21 using search engines, crawlers and robots change the legal situation? In my view it does not…

A machine or a computer and the software that runs it has at some point been constructed and programmed by an individual. As noted by Sookman at 10.5:

“… an electronic agent, such as a computer program or other automated means employed by a person, is a tool of that person. Ordinarily, the employer of a tool is responsible for the results obtained by the use of the tool since the tool has no independent volition of its own. When computers are involved, the requisite intention flows from the programming and use of the computer.”

I agree with this statement. Liability is not avoided by automating the actions in question.

Despite this general rule, courts have struggled with questions involving making owners of programmed systems responsible for damages suffered by third parties as a result of their use, in many cases due to policy concerns about imposing such liability. In the U.S., for example, some courts have declined to find service providers liable for acts of copyright infringement carried out by pre-programmed computers doing exactly what they were intended to do, on the basis that the automated actions lack "volition" on the part of the service provider. A leading case in this regard is Cartoon Network LP v. CSC Holdings Inc., 536 F.3d 121 (2d Cir. 2008), where the United States Court of Appeals for the Second Circuit ruled that Cablevision's subscribers, and not Cablevision, were responsible for making copies of TV programming for later viewing as part of Cablevision's remote storage DVR service, even though Cablevision designed, built and operated the systems that made the copies of the television programming. Central to this line of cases is the view that there is a significant difference between an entity that makes copies at the request of an individual and an entity that uses pre-programmed computers to carry out the request automatically.

The issue of attribution of responsibility, although not always expressly analyzed by the courts, involves an inquiry as to whether the actor has caused the act that is alleged to cause the harm. It involves the application of traditional tort causation principles to ascertain whose conduct has been so significant and important a cause that he or she should be legally responsible. A person's responsibility is generally based on whether the person caused, or materially contributed to, the delictual act. See Athey v. Leonati, [1996] 3 SCR 458; Resurfice Corp. v. Hanke, 2007 SCC 7 (CanLII), [2007] 1 SCR 333; Blackwater v. Plint, [2005] 3 SCR 3. On this basis a person that designs, programs and operates a machine to carry out an act would normally be responsible for causing it or materially contributing to it.

Recently, some courts that have examined users' responsibility for the delictual acts of their automated systems have given greater recognition to the pre-programmed intentions of the system designer/user in determining issues of causation. For example, on very similar facts to the Cablevision case, in National Rugby League Investments Pty Limited v Singtel Optus Pty Ltd [2012] FCAFC 59 (27 April 2012), the Full Court of the Federal Court of Australia ruled that the operator of the TV Now service, which enabled individuals to time shift programs to a cloud for later viewing, was jointly responsible with users for causing the making of copies of the television programming using its automated systems. Similarly, in Australian Competition and Consumer Commission v Google Inc [2012] FCAFC 49, Google was found liable by the Full Court of the Federal Court of Australia for causing the publication of misleading advertising through the automated use of its AdWords advertising service.

For policy reasons, some causes of action, like the common law action for defamation, provide intermediaries with a defense where they are an innocent disseminator. In the Canadian Crookes v Newton 2011 SCC 47 case, several justices of the Supreme Court of Canada considered the applicability of the "innocent disseminator" defense in the Internet context. Relying on cases in the fields of defamation and copyright law, including its own decision in SOCAN v Canadian Assn. of Internet Providers, [2004] 2 SCR 427, the court described an "innocent disseminator" as an entity that acts in a content-neutral manner; that has no actual knowledge of an alleged libel or infringement; that is not aware of circumstances that would put it on notice to suspect one, and is not negligent in failing to find out about it; and whose role in the dissemination of the information is merely passive, instrumental, and automated. Under this approach, a court could find that an entity like Google is a publisher of materials it disseminates, but potentially grant it a defense from the damages its service causes where it meets the "innocent disseminator" conditions.

Returning to the Trkulja v Google case, the jury made findings of fact that Google's search service was not merely a passive, automated tool used by third persons to disseminate the defamatory materials. It found as a fact that Google had programmed its system to produce useful combinations and associations of information from various locations on the Internet. As such, on a causation-based liability analysis, the jury had a factual basis to find Google responsible for causing the publication of the information. As a publisher, Google might in some cases have a defense of innocent dissemination based on the principles described by the Supreme Court in the Crookes v Newton case. On the facts of the Trkulja v Google case, Google did not meet the conditions for the defense once it knew, or ought reasonably to have known, that the relevant matter was defamatory and took no steps to stop its publication. As the jury found for Google in the pre-notice period, the court did not have to rule on the very interesting and important question of whether Google's role in cutting and pasting information in a way not previously published was sufficiently "content neutral" and passive to enable it to make out the defense prior to receiving notice of the claim.

The Trkulja v Google case (and the Tamiz v Google case) have been appealed. There will likely be more to come.
