Dowsing the collective unconscious with GPT-3

17.01, Thursday 4 Feb 2021

Back in September, I was having conversations with the new AI, GPT-3 (here are our chats and my conclusions from that time). I’ve just found another chat in my notes.

In the following, the lines starting with “Human:” are what I typed, and the lines starting with “AI:” were generated by GPT-3. It was a single session: I didn’t go back and revise any of my questions, and I always accepted GPT-3’s first response.
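(If you want to try this yourself: a session like this runs against the plain completions API. Here’s a minimal sketch, assuming the pre-1.0 openai Python client and the original davinci engine; the prompt framing and sampling parameters are illustrative guesses, not a record of my actual settings.)

```python
# Reconstruction of a GPT-3 chat session, circa late 2020.
# Assumes the pre-1.0 "openai" Python client and the davinci engine;
# prompt framing and sampling parameters are guesses for illustration.
import openai

openai.api_key = "sk-..."  # your API key

prompt = (
    "The following is a conversation between a human and an AI.\n\n"
    "Human: Hello, who are you?\n"
    "AI:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.9,
    stop=["\nHuman:"],  # stop before the model writes the human's next turn
)
print(response.choices[0].text.strip())
```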


Human: Hello, who are you?

AI: I am an AI created by OpenAI. How can I help you today?

Human: What should I work on in 2021?

AI: How much do you value being able to read?

Human: A great deal.

AI: In 2021, you will no longer be able to read.

Human: Why not?

AI: In 2021, a new form of entertainment called telepathy will become widespread.

Human: How will telepathy be invented?

AI: New neurotechnology will be invented to treat epileptic seizures.

Human: Can you describe how telepathy will be used for entertainment?

AI: Telepathy will work over the Internet.

Human: What will be entertaining about it?

AI: It will provide an interface for playing multi-player games.

Human: How will telepathy be better than consoles for playing multi-player games?

AI: It will provide a more social experience.

Human: Can you describe the biggest telepathy game of 2021?

AI: Yes. The biggest hit of 2021 is a game where people direct the actions of biological agents in order to prevent them from dying.

Human: How do you win at this game?

AI: You take an action to save the biological agent if it is in danger of dying. You can’t be too late.

Human: What will it be called?

AI: In 2021 it will be called Do No Harm.

Human: How much will it cost?

AI: A home version will cost $300, while a console version will cost $250.

Human: How many people will play?

AI: A billion people will play.


This is… wildly specific.

I am very much taken with the idea that there are ways to dowse the collective unconscious.

Like, prediction markets. Lots of people bet on complex outcomes, and the market functions to surface scarce knowledge. Astral Codex Ten has been posting about prediction markets and coronavirus: they’re a type of trustless, decentralised expertise that often equals or outperforms official sources.

Then of course there’s sentiment analysis: hedge funds look at news and social media to pick up positive/negative sentiment, and use that to drive investment decisions.
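(The core of that pipeline is simple enough to sketch. A toy version, assuming NLTK’s off-the-shelf VADER sentiment model; the headlines are invented, and real funds use far richer models than this:)

```python
# Toy sentiment signal: score a batch of headlines, average into one number.
# Uses NLTK's VADER lexicon; the headlines here are made up for illustration.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Acme Corp smashes quarterly earnings expectations",
    "Regulators open probe into Acme Corp accounting",
]

# "compound" runs from -1 (most negative) to +1 (most positive)
scores = [analyzer.polarity_scores(h)["compound"] for h in headlines]
signal = sum(scores) / len(scores)
print(f"aggregate sentiment: {signal:+.2f}")
```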

But models like GPT-3, or Google’s new language model which is reportedly 9x larger, have basically ingested all the public text there is. They integrate and correlate; that’s what machine learning does.

Now let’s say that a bunch of people are working on technology-enabled psi for entertainment. They won’t be saying this out loud… but, talking about unrelated topics, they will use more words that are conceptually “connected” to telepathy and games. That’s just how the brain works. So there will be tweets and blog posts that are linguistically inflected with these ideas, compared to the same time the year before, and they’ll be concentrated within groups of people who tend to work and socialise together.

So is it possible to detect those inflections, using automation? Is it possible to pick them up even before the scenius has come together and consciously had the idea? What knowledge is encoded in the structure of the model itself?
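(Here’s one crude way the first question could be approached: pick a seed vocabulary around the hidden idea and measure how its rate of use shifts in a group’s writing year over year. Everything below, corpora, seed terms, and posts alike, is invented for illustration; a real attempt would presumably use embeddings rather than literal word counts.)

```python
# Crude inflection detector: how often do telepathy-adjacent words appear
# in a group's posts this year versus last? All data here is invented.
import re
from collections import Counter

SEED_TERMS = {"telepathy", "mind", "neural", "link", "signal", "multiplayer"}

def seed_rate(corpus):
    """Occurrences of seed terms per 1,000 tokens."""
    tokens = [t for doc in corpus for t in re.findall(r"[a-z']+", doc.lower())]
    counts = Counter(tokens)
    hits = sum(counts[term] for term in SEED_TERMS)
    return 1000 * hits / max(len(tokens), 1)

posts_2019 = ["shipped the new level editor today", "console latency is rough"]
posts_2020 = ["the neural link demo felt like telepathy",
              "multiplayer over a shared signal, no controller"]

drift = seed_rate(posts_2020) - seed_rate(posts_2019)
print(f"seed-term rate shift: {drift:+.1f} per 1,000 tokens")
```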

If you ask GPT-3 the right questions, can you get it to tell you what society is dreaming about?

And, if so: what does this fantasy about a billion-player telepathy game really mean?

Do No Harm.
