AI will make spam worse, but banning it is disastrous

I read with interest a new piece on AI risks by Return contributor Jon Askonas and economist Samuel Hammond. It’s a sensible counter to some of the AI doomer alarmism that’s currently fashionable, and I endorse its general approach to the big questions, i.e., focus on risks that are foreseeable and manageable, and don’t let outsized fears of an AI apocalypse drive you to endorse the construction of a global anti-AI leviathan.

But when it comes to some of the authors’ recommendations for managing the risks we all know are there — specifically around the ability of large language models to greatly increase the volume and effectiveness of spam, phishing attempts, and disinformation campaigns — I have objections.

Askonas and Hammond write:

Consider the potential for current-level AI to create an explosion in internet spam. Bad actors will need to make use of open source generative models, as such activities clearly violate the terms of service of large API providers like OpenAI. Fortunately, open source models are several steps behind the state of the art. The good guys thus have an advantage in the adversarial race to build tools for combatting AI-generated spam. Any regulation that slowed down their effort would risk tilting the balance back in favor of the bad actors, as they would feel no obligation to follow the same rules.

Nevertheless, regulation is needed to prevent powerful models from going open source in the first place.

I agree that LLMs can and will turbocharge the production of harmful or nuisance content, but I emphatically do not agree with the authors that we need regulation that restricts open-source AI.

I don’t say this because I’m some hyper-libertarian — there could indeed be a role for government regulation to play in fixing our spam/malware/disinfo problems. But regardless of whether the solution comes from the public or private sector, it will only work if it addresses the root cause of all of these problems: the misaligned incentives that currently govern our information ecosystem.

We’ve known about the fix for spam for at least 25 years

I’ve written on jonstokes.com about the deep structural reasons why we have many of the problems we do on the web — clickbait, harassment and abuse, pages overloaded with ads, and so on. I encourage you to read that piece, so I won’t summarize it here.

Instead, I’ll zero in on the specific example that Askonas and Hammond raise in their article: spam.

Under our existing email system, users pay more to receive messages than senders pay to send them. Users pay indirectly, by diverting scarce attention to their email provider’s ads or to the spam itself, or directly, by paying for storage and email service. Either way, in the aggregate a single spam campaign costs the total mass of recipients far more to receive and process than it costs the sender to send.

The result of this flawed incentive structure, then, is that we have lots of spam. Still.
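To make the asymmetry concrete, here’s a quick back-of-envelope calculation in Python. Every number in it is an assumption I’ve picked purely for illustration, not a measurement:

    # Back-of-envelope sketch of the cost asymmetry described above.
    # Every figure here is an illustrative assumption, not a measurement.
    messages_sent = 10_000_000           # one hypothetical spam campaign
    sender_cost_per_message = 0.00001    # assumed cost to send one message, in dollars
    seconds_to_triage = 3                # assumed time for a recipient to notice and delete it
    value_of_attention_per_hour = 20.0   # assumed dollar value of a recipient's hour

    sender_cost = messages_sent * sender_cost_per_message
    receiver_cost = messages_sent * (seconds_to_triage / 3600) * value_of_attention_per_hour

    print(f"Sender pays:   ${sender_cost:,.2f}")    # about $100
    print(f"Receivers pay: ${receiver_cost:,.2f}")  # roughly $166,667 in aggregate attention

Under assumptions like these, the sender is out about a hundred dollars while the recipients collectively burn six figures’ worth of attention, and that gap is exactly what keeps spam profitable.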

But ever since I began covering the spam problem as a journalist back in 1998, I’ve been aware that there’s a widely known, very easy fix for all this: Redesign the internet’s distributed messaging layer so that senders bear more of the cost of a mass email campaign than receivers.

To expand on this a bit, a “sender pays” scheme could be configured in any number of ways. One popular idea is a micropayments-based messaging system in which every user can set a price that a message sender has to pay them in order to send them a message. Here are some variants on this idea:

  • If a receiver adds a sender to his address book, inbound messaging from that sender is free.
  • A receiver could choose to refund a sender if the message turned out to be useful.
  • Receivers could set different inbound messaging rates for different TLDs, domains, or addresses.

Ultimately, though, the point of these schemes is the same: if senders must attach a small payment to a message in order for the receiver to see it, the incentive structure of messaging changes in a way that eliminates the ROI of massive spam and phishing campaigns.
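To make this less abstract, here’s a rough sketch, in Python, of how the variants above might be expressed as a single receiver’s pricing policy. Everything in it (the class name, the default five-cent price, the per-domain overrides) is invented for illustration; a real version would have to live in a mail provider’s or messaging protocol’s delivery layer:

    # Hypothetical "sender pays" acceptance policy for a single receiver.
    # All names, rates, and rules here are invented for illustration.
    class InboundPolicy:
        def __init__(self, default_price_cents=5):
            self.default_price_cents = default_price_cents
            self.address_book = set()   # known senders: their mail is free
            self.domain_rates = {}      # per-domain (or per-TLD) overrides, in cents

        def price_for(self, sender_address):
            """Return the micropayment (in cents) required to reach this inbox."""
            if sender_address in self.address_book:
                return 0
            domain = sender_address.split("@")[-1]
            return self.domain_rates.get(domain, self.default_price_cents)

        def accept(self, sender_address, payment_cents):
            """Deliver the message only if the attached payment meets the asking price."""
            return payment_cents >= self.price_for(sender_address)

        def refund(self, sender_address, payment_cents):
            """The receiver found the message useful: return the fee and whitelist the sender."""
            self.address_book.add(sender_address)
            return payment_cents    # amount to send back to the sender

    policy = InboundPolicy(default_price_cents=5)
    policy.domain_rates["trusted-newsletter.example"] = 1   # cheaper rate for a domain I like
    policy.address_book.add("friend@example.com")           # people in my address book mail me free

    print(policy.accept("friend@example.com", 0))       # True: in the address book, no payment needed
    print(policy.accept("stranger@random.example", 0))  # False: no payment attached
    print(policy.accept("stranger@random.example", 5))  # True: paid the asking price

The details here don’t matter much; the point is that each of the variants above is just a different pricing rule layered on the same basic “attach a payment or don’t get delivered” mechanic.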

Why haven’t we fixed the spam problem?

Given that everyone has known for decades that a “sender pays” model fixes the spam problem, it’s worth asking why we’ve never made the transition to this model. It’s especially odd that we’re not doing this when both Google and Apple are in the perfect position to implement such a scheme on top of their popular email services.

I don’t have anything like a complete answer to this question — there’s probably a whole book on this idea for anyone with the time and inclination to write it. But I have to imagine a big part of the issue is that if I want to receive payments on the internet, I have to go through a know-your-customer process that involves uploading documents for identity verification and entering information about my account at some financial intermediary.

So for reasons that mostly have to do with anti-terrorism and money-laundering laws, getting set up to receive money over the internet is a complicated process that requires handing over a ton of sensitive information, which is why you don’t want to do it too many times.

Another more minor barrier is the network effect, i.e., if I’m in a minority of the population using a boutique “sender pays” email system, then there are whole categories of emails I just won’t get, some of them useful — e.g., password resets from random online vendors, notices and alerts from my kids’ school, email updates from the car dealership’s service shop on the progress of my repairs, etc. But it’s easy to think of ways to overcome this, e.g., the first email from a domain is free so I can get it and add it to my address book, but the rest require a micropayment.
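That bootstrap rule is easy to sketch too, in the same hypothetical vein as the policy above (the function name and the five-cent price are, again, made up):

    # Hypothetical bootstrap rule: the first message from a domain I've never seen
    # is free (so password resets and school notices still arrive), and everything
    # after that requires the usual micropayment.
    def price_with_free_first_contact(sender_address, seen_domains, normal_price_cents=5):
        """Return the required payment in cents, treating first contact from a new domain as free."""
        domain = sender_address.split("@")[-1]
        if domain not in seen_domains:
            seen_domains.add(domain)    # remember the domain so only the first message is free
            return 0
        return normal_price_cents

    seen = set()
    print(price_with_free_first_contact("no-reply@school.example", seen))  # 0: first contact is free
    print(price_with_free_first_contact("no-reply@school.example", seen))  # 5: later mail costs money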

It seems pretty clear that the main barrier to moving to a “sender pays” model is that while we want it to be easy for everyone, everywhere to send payments online, we want it to be difficult for randos and anons to receive payments. Only the KYC’d and 1099’d should be able to take delivery of money.

Of course, the moment a viable payments layer emerged that let anons receive money in an extremely low-overhead way (receiving bitcoin is even easier than signing up for a free mail service), the state went all out to kill it. We really, really seem to want to keep tabs on who is getting paid.

We’ll keep screwing this up

Sticking with “receiver pays” for the internet’s main messaging protocol is a deliberate decision that we’ve made as a society, and rather than reverse it, we’re now contemplating the addition of a new anti-AI enforcement regime to that decision’s list of major downsides. Rather than use the power of government to encourage a mass migration to a “sender pays” messaging model, we’re instead poised to use it to crack down on open-source LLMs.

Not to put too fine a point on it, but this sucks and is stupid … which is why it’s probably inevitable.

So as LLMs start to disappear from app stores and GitHub repos and migrate onto the dark web, remember that it didn’t have to be this way. We didn’t have to live in a world where only mega-corporations and offshore criminals have unfettered access to the latest machine learning models, while startups and the general public have to first prove that they’re not criminals in order to access that same tech. We could have fixed all of this decades ago, but the forces of centralization have prevented it in the name of fighting terrorism and stopping money launderers and tax cheats.

Postscript: The sad path for micropayments

Let’s say we decide not to ring-fence AI with regulation in the name of fighting spam and “disinfo” — maybe because the genie’s out of the bottle and this is impossible as a practical matter — and instead we move to a “sender pays” model as the standard for online messaging. That’s a win, right?

It all depends on the implementation. If, instead of using Bitcoin or another public blockchain for messaging, payments, and identity, we were to build our micropayments-based “sender pays” model on central bank digital currencies, that would be maximally bad.

A CBDC-based sender pays model would certainly fix the spam problem, and it would also “fix” the disinfo problem by giving the government incredibly fine-grained control over who can say what to whom. I guess this is good if you trust the government to decide what we can say to each other, but for the rest of us, no thanks.

So an integrated payments and messaging layer with a centrally managed filter would be the worst possible outcome — far worse than an internet that’s flooded with LLM-generated content — unless you’re a technocrat who wants to stop people from spreading anti-establishment narratives in a peer-to-peer fashion. We should fight this with everything we have.

Jon Stokes

Jon M. Stokes is co-founder of Ars Technica. He has written extensively on microprocessor architecture and the technical aspects of personal computing for a variety of publications. Stokes holds a degree in computer engineering from Louisiana State University and two advanced degrees in the humanities from Harvard University.
https://x.com/jonst0kes?s=20