Co-Intelligence: Living and Working with AI

💡 “I believe the cost of getting to know AI—really getting to know AI—is at least three sleepless nights.”

“General Purpose Technologies typically have slow adoption, as they require many other technologies to work well. The internet is a great example. While it was born as ARPANET in the late 1960s, it took nearly three decades to achieve general use in the 1990s, with the invention of the web browser, the development of affordable computers, and the growing infrastructure to support high-speed internet.”

ChatGPT reached 100 million users faster than any previous product in history, driven by the fact that it was free to access, available to individuals, and incredibly useful.

Frontier AI models, trained on the largest datasets with the most computing power, seem to do things that their programming should not allow—a concept called emergence. They shouldn’t be able to play chess or demonstrate empathy better than a human, but they do.

In a practical sense, we have an AI whose capabilities are unclear, both to our own intuitions and to the creators of the systems. One that sometimes exceeds our expectations and at other times disappoints us with fabrications.

The AI stores only the weights from its pretraining, not the underlying text it trained on, so it reproduces a work with similar characteristics but not a direct copy of the original pieces.
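
As an illustration of that weights-versus-text distinction, here is a toy character-bigram model in Python. It is only a loose analogy of my own, not anything from the book: real models learn billions of weights via gradient descent rather than a count table. But the principle carries over: after training, only the learned statistics remain, and generation samples from those statistics instead of retrieving stored text.

```python
import random
from collections import defaultdict

# Toy analogy for "weights, not text": after training, only the transition
# counts (the "weights") remain; the training text itself is discarded.
corpus = "the cat sat on the mat and the cat ran to the man"

weights = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    weights[current][nxt] += 1
del corpus  # the model keeps no copy of what it was trained on

def generate(start: str, length: int = 40) -> str:
    """Sample text from the stored statistics, character by character."""
    out = [start]
    for _ in range(length):
        followers = weights.get(out[-1])
        if not followers:
            break
        chars, counts = zip(*followers.items())
        out.append(random.choices(chars, weights=counts)[0])
    return "".join(out)

print(generate("t"))  # statistically similar to the corpus, not a stored copy
```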

As soon as you start asking an AI chatbot questions about itself, you are beginning a creative writing exercise constrained by the ethical programming of the AI.

This leads to a conclusion, one that is hard for a lot of us to grasp: whatever AI you are using right now is going to be the worst AI you will ever use.

But it is surprisingly close. When interacting with the AI version of me, I had to actually Google the studies that AI-me cited to make sure they were fake, because it seemed plausible that I had written about a real study like that. I failed my own Turing Test: I was fooled by an AI version of myself into thinking it was accurately quoting me, when in fact it was making it all up.

And you can’t figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes. So if you ask it to explain itself, the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result.

When faced with the tyranny of the blank page, people are going to push The Button. It is so much easier to start with something than nothing.

The same thing happened in the writing experiment done by economists Shakked Noy and Whitney Zhang from MIT, which we discussed in chapter 5—most participants didn’t even bother editing the AI’s output once it was created for them. It is a problem I see repeatedly when people first use AI: they just paste in the exact question they are asked and let the AI answer it. There is danger in working with AIs—danger that we make ourselves redundant, of course, but also danger that we trust AIs for work too much.

Many companies initially banned ChatGPT use, often because of legal concerns. But these bans had a big effect . . . they caused employees to bring their phones into work and access AI from personal devices. While data is hard to come by, I have already met many people at companies where AI is banned who are using this workaround—and those are just the ones willing to admit it! This type of shadow IT use is common in organizations, but it incentivizes workers to keep quiet about their innovations and productivity gains.

Right now, there is some evidence that the workers with the lowest skill levels are benefiting the most from AI, and so might have the most experience in using it, but the picture is still not clear. As a result, companies need to include as much of their organization as possible in their AI agenda, a democratic turn of events that many companies would rather avoid.

The average student tutored one-to-one performed two standard deviations better than students educated in a conventional classroom environment. This means that the average tutored student scored higher than 98 percent of the students in the control group.
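
To see where the 98 percent figure comes from: if control-group scores are assumed to be normally distributed (the standard reading of Bloom's two-sigma result), a student two standard deviations above the mean sits at the cumulative probability Φ(2) ≈ 97.7 percent, which rounds to the 98 percent cited. A quick check using only the Python standard library:

```python
from statistics import NormalDist

# Percentile rank of a score two standard deviations above the mean,
# assuming normally distributed scores in the control group.
percentile = NormalDist().cdf(2)  # Phi(2), the standard normal CDF at z = 2
print(f"{percentile:.1%}")  # -> 97.7%, commonly rounded to "98 percent"
```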

One study of eleven years of college courses found that when students did their homework in 2008, it improved test grades for 86 percent of them, but it helped only 45 percent of students in 2017. Why? Because over half of students were looking up homework answers on the internet by 2017, so they never got the benefits of homework.

Even before generative AI, 20,000 people in Kenya earned a living writing essays full time.

Additionally, and most important: there is no way to detect whether or not a piece of text is AI-generated. A couple of rounds of prompting remove the ability of any detection system to identify AI writing. Even worse, detectors have high false-positive rates, accusing people (and especially nonnative English speakers) of using AI when they are not. You cannot ask an AI to detect AI writing either—it will just make up an answer. Unless you are doing in-class assignments, there is no accurate way of detecting whether work is human-created.

I have made AI mandatory in all my classes for undergraduates and MBAs at the University of Pennsylvania. Some assignments ask students to “cheat” by having the AI create essays, which they then critique—a sneaky way of getting students to think hard about the work, even if they don’t write it themselves.

My coauthors and I have published some of the first research: E. R. Mollick and L. Mollick, “New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments” (December 13, 2022). Available at SSRN: https://ssrn.com/abstract=4300783
