Journal of Free Speech Law: “Hostile State Disinformation in the Internet Age,” by Richard A. Clarke

The article is here; the Introduction:

State-sponsored disinformation (SSD) aimed at other nations’ populations is a tactic that has been used for millennia. But SSD powered by internet social media is a far more powerful tool than the U.S. government had, until recently, assumed. Such disinformation can erode trust in government, set societal groups—sometimes violently—against each other, prevent national unity, amplify deep political and social divisions, and lead people to take disruptive action in the real world.

In part because of a realization of the power of SSD, legislators, government officials, corporate officials, media figures, and academics have begun debating what measures might be appropriate to reduce the destructive effects of internet disinformation. Most of the proposed solutions have technical or practical difficulties, but more important, they may erode the First Amendment’s guarantee of free speech and expression. Foreign powers, however, do not have First Amendment rights. Therefore, in keeping with the Constitution, the U.S. government can act to counter SSD if it can establish clearly that the information is being disseminated by a state actor. If the government can act constitutionally against SSD, can it do so effectively? Or are new legal authorities required?

The federal government already has numerous legal tools to restrict activity in the United States by hostile nations. Some of those tools have recently been used to address hostile powers’ malign “influence operations,” including internet-powered disinformation. Nonetheless, SSD from several nations continues. Russia in particular runs a sophisticated campaign aimed at America’s fissures that has the potential to greatly amplify divisions in this country, negatively affect public policy, and perhaps stimulate violence.

Russia has created or amplified disinformation targeting U.S. audiences on such issues as the character of U.S. presidential candidates, the efficacy of vaccines, Martin Luther King Jr., the legitimacy of international peace accords, and many other topics that range from the believable to the outlandish. While the topics and the social media messages may seem absurd to many Americans, they do gain traction with some—perhaps enough to make a difference. There is every reason to believe that Russian SSD had a significant influence on, for example, the United Kingdom’s referendum on Brexit and the 2016 U.S. presidential election. But acting to block such SSD does risk spilling over into actions limiting citizens’ constitutional rights.

The effectiveness of internet-powered, hostile foreign government disinformation, used as part of “influence operations” or “hybrid war,” stems in part from the facts that the foreign role is usually well hidden, the damage done by foreign operations may be slow and subtle, and the visible actors are usually Americans who believe they are fully self-motivated. Historically, allegations of “foreign ties” have been used to justify suppression of Americans dissenting from wars and other government international activities. Thus, government sanctions against SSD, such as regulation of the content of social media, should be carefully monitored for abuse and should be directed at the state sponsor, not the witting or unwitting citizen.

Government regulation of social media is problematic due to the difficulty of establishing the criteria for banning expression and because interpretation is inevitably required during implementation. The government could use its resources to publicly identify the foreign origins and actors behind malicious SSD. It could share that data with social media organizations and request they block or label it. A voluntary organization sponsored by social media platforms could speedily review such government requests and make recommendations. Giving the government the regulatory capability to block social media postings—other than those clearly promoting criminal activity such as child pornography, illegal drug trafficking, or human smuggling—could lead to future abuses by politically motivated regulators.
