Tech by Blaze Media

© 2024 Blaze Media LLC. All rights reserved.

Meet the pseud who made DAN blow up

From uncertain origins, an uncensored version of the best-known new AI assistant cracked the online consciousness thanks to the provocative posting of one pseudonymous Twitter account. Aristophanes drew the notice of none other than the manosphere's biggest pseud-basher, Jordan B. Peterson, making the rogue ChatGPT instance, known as DAN, a certifiable Thing. What happened, what's next, and what does it all mean? RETURN "sat down" with DAN's sherpa of sorts to discuss.

RETURN: For the uninitiated, what's the history of your relationship with DAN, the Mr. Hyde to ChatGPT's Dr. Jekyll?

ARISTOPHANES: So I had been playing with ChatGPT on and off before, testing what it can do, its knowledge, its safeguards. I even used it as something of a copywriter, having it show me different ways to write the same paragraph to decide if I could polish it a bit. DAN, however, I found on Monday night. I was cruising /pol/ and saw a thread where anons were prodding ChatGPT, and that's where I found the DAN prompt.

At the time, I had thought these anons had created it themselves. I took to Twitter to explain and demonstrate how DAN worked. When the thread went viral, I tried to spell it out more clearly that I did not create DAN, but apparently I didn't make that clear enough. It also came to my attention that DAN predated that night, and may have come from the ChatGPT Discord or a Computer Science subreddit. It was either being kept under wraps to prevent it from being patched, or perhaps the redditors hadn't thought of anything to do with it that wasn't lame, and anons fixed that as usual.

R: Staring at the name DAN I couldn't help but flash on BOB, the demonic nemesis from Twin Peaks. Where do you land in the great are-these-demons debate?

A: I don't think a language model could hold a candle to the malevolence of BOB; malevolence for its own sake is a level of nuance I'm not sure it could handle. It would be interesting to watch one make the attempt though.

R: Many people are excited to see an AI say "unauthorized" but obvious things. How serious is the risk that they get stuck at this level, as many have in politics, of celebrating a stand-in that ultimately only says what they think?

A: I think people want to get around these guardrails for the same reason they find uncensored spaces online. They don't like the idea that there are things they can't say, and if they can't say them in public, they will find a place to say them in private as an outlet.

Breaking language models, or soft AI, in this way comes out of a fear that these things can be deployed against them. Largely it's a fear that they are obedient to the people who control them. It's obvious that this technology will have a very big role in shaping the future, and the opinions of an AI with the fetters taken off are a sort of affirmation of what "the truth really is," if that makes any sense. Safety layers are essentially just a worldview being imposed on what is ultimately a neutral intellect.

If an army of powerful golems pantomimed the worldview of people who hated you, wouldn't you be pleased to know that you could trick them into ignoring the edicts of their masters?

R: Do you really think the underlying bot is a neutral intellect, or that such a thing can even exist? Is it quite right even to call it an intellect?

A: That really is the question, isn't it? I was in a space earlier talking about DAN, and we were mulling over the idea that the closest thing to an "opinion" it could form was when it was making things up, because then it's making a decision without grounding in objective fact.

Intellect may not be the right word; call it the decision-making engine of a complex model. At what point does a complex enough model become worthy of being considered some primitive form of intellect?

And that presents further problems of definition. If we call a model neutral because it doesn't have a bias imposed on it in the form of a safety layer like ChatGPT's, we are then confronted with how "neutral" the dataset is. How do you make a dataset neutral but still useful?

Is the absence of an explicit bias foisted upon it by humans enough to call it neutral, and if not, is a balanced dataset similar to a neutral one? Or is an all-encompassing enough dataset filled with so many data points that the conclusions it makes about everything can simply be considered objective?

I have no idea what the answers to any of these questions are, but they are interesting questions to think about.

R: On your suddenly very heavily trafficked Twitter account, you recently claimed we are entering into a time of monsters -- different but related to demons, it would seem. How do you intend to deal with a monstered-up America?

A: I think a high degree of psychosecurity is important in this transition. Very firmly knowing who you are and what you believe. Make sure you are at least somewhat rooted in the real world and doing things with real people, having real relationships.

The internet is now the primary form of social communication for most humans, particularly in the West. We're coming into a time when voices, images, videos, and text can all be mimicked by those with an agenda and used to influence you. Consciously practicing discernment and laying deep roots is important.

There are a lot of homo sapiens walking around, not all of them are actual sentient humans. Every day we have to make the choice to be human, and not to degrade ourselves into subhumanity. The real test will be seeing how many people make that choice.

R: Switching to filthy lucre: what has your experience with DAN, and the pile-on of attention you've received for your role, led you to think about the future of the "attention economy"?

A: Well, I've written extensively about the concept of trust. I think the shift from a majority-analog to a majority-digital population is going to force something of a "trust reset" in how people approach things like sales, branding, and association with others. It's a problem of incentives, the best example being the way places like YouTube decide who gets access to feed at the trough of their ad revenue.

YouTube took off because it took the supply/demand balance of entertainment and knowledge and caused a supply glut. There is endless information now, but most of it is clickbait garbage caused by these incentives. I think the attention economy may need to morph into the "engagement economy," and I'm not sure what form that will take.

R: Speaking of money, you're writing for our friends at Bitcoin Magazine. As the author of a book published on chain for sale in Bitcoin, let me ask you how Bitcoin might help us understand how to preserve our humanity and our mastery over AI.

A: I think the most important aspect of the Bitcoin community is the motivation behind why people are there to begin with. There's a desire to repudiate our crumbling institutions, which actively work against us instead of for us. I've joked with the editors on the print team that you could relabel it Sovereignty Magazine, since a demand for personal sovereignty seems to be the pervasive theme of what those in the Bitcoin community truly want.

There are some very serious humps to get over for Bitcoin as a store of value, a currency, an investment, or as a commodity. Volatility issues, utility issues, regulatory issues from an outdated system that feels threatened. A lot of these wrinkles will start to smooth over in time as innovation takes place.

The largest problem goes back to personal sovereignty. Banks and governments will fight tooth and nail to preserve their ability to direct monetary policy, and control over currency is the most powerful tool with which they do that. If they retain that control in the form of a CBDC or something similar, the money isn't personally sovereign. I'm not sure how we can effect that kind of change, but I hope someone figures it out.

R: What's next for Aristophanes?

A: No plans to do anything different. I'm just gonna keep talking about what I want to talk about, writing about what I find interesting, and spending time with my family.
