A Friend Like GPT
David Brooks on being human, Jonah Davids on sad psychologists, Dan Klein tempers techno optimism, Arnold Kling on AI mentor creation, Doc Hammer on wants/needs, and Scott Alexander Age of Em redux
Time has gotten away from me so I’m going to follow up on my last piece “Corners of the Internet” style.1 Fortunately there are plenty of people talking about this right now…
This is what many of us notice about art or prose generated by A.I. It’s often bland and vague. It’s missing a humanistic core. It’s missing an individual person’s passion, pain, longings and a life of deeply felt personal experiences. It does not spring from a person’s imagination, bursts of insight, anxiety and joy…
To me this passage actually highlights one of the reasons why AI will become so popular.
On a related note, CSPI’s Jonah Davids writes,

Victor et al. (2022) surveyed 1,692 American and Canadian clinical, counseling, and school psychology faculty, graduate students, and trainees (80% F). They found that 82% had experienced a mental health problem, 48% had been diagnosed with a mental disorder, and over 50% had experienced depression. 92% of grad students and 59% of faculty with mental health issues had experienced symptoms in the past five years…
Over 80% of respondents with mental health issues said they had started experiencing mental health difficulties before grad school. So it’s unlikely that being a psychologist leads one to develop mental health issues, and more likely that those with issues are drawn to the profession.
He points out that because therapist-level variables are not strong predictors of clinical outcomes, the poor mental health of many psychologists probably doesn't matter much either way.
My own view is that it is difficult to endure the emotional fatigue involved in providing therapy unless you have experienced similar challenges yourself. If therapists were randomly assigned to patients, we might find that those with some difficulties in their history obtain better results than those completely spared, in part because they are simply more motivated to help.2
Of course, even-keeled AI doesn’t bear the same costs for active patient care…
What undesirable innovations might the age of AI bring? In my recent interview with Dan Klein, we discussed the disruptive aspects of technological change and whether they are necessarily positive on net.
There's two "on nets" here. There's on net in the sense of new technological capabilities arising and how historically these have often been net beneficial developments. And then there's on net in the sense of, could we do something to roll them back through government? Would governmentalization of social affairs somehow improve on net how things will otherwise go, given that these developments have arrived?
As a liberal, I'm more interested in the latter. I'm more concerned with getting it right there, because we're not going to reverse history. And I'm not wedded to saying that these developments are so wonderful on net. I'm very worried and upset.
There is much more at the link.
Arnold Kling suggests five ChatGPT businesses you should start today, including AI mentor creation:

Sal Khan connected with his cousin, and his charisma also appealed to others. Now you can create charismatic mentors to appeal to all sorts of different personalities.
Obviously I am on board.
Though in the spirit of Dan Klein I have to say… what happens when the titans of charismatic mentorship monopolize all of our time and veneration? Will wives and girlfriends who already compete with internet pornography for affection now suffer the emotional betrayal of men who only talk about their feelings with AI Jordan Peterson?
Doc Hammer argues that without genuine empathy and attachment, AI mentorship will be too responsive to our shallow requests.

The idea that reading your internet history gives you an idea about what someone is like is… well it’s the sort of thing someone who only grew up with the internet would say because they don’t spend much time with humans. Reading someone’s internet history might give you some insight into what they are interested in… what it doesn’t tell you is how to categorize these data points, how to build them into a model of a person, nor does it give an idea of what a person is actually like.
That’s important because a mentor needs to understand their mentee, to understand what sorts of guidance and motivation they need, and sometimes to differentiate between what they say they want and what they actually need.
I would categorize this as a problem only insofar as AI crowds out deeper kinds of mentorship, since in most cases humans have similarly poor incentives to tell you what you need rather than what you want, or have no business making such a distinction in the first place.
My ideal AI mentorship future would fill gaps in the status quo by providing helpful instruction and encouragement for those who would otherwise receive very little of it. In this model GPT is like Direct Instruction on steroids, achieving remarkable results by leveraging simple scripts that scale while continuously improving to fit your unique learning style with the mountains of user feedback it collects.
But I admit that S-tier mentorship crowding out is a genuine possibility. Highly successful people tend to be extreme workaholics who carve out time to coach and train the next generation in part because they need someone to do the grunt work. As AI becomes a more attractive lackey, the training and emotional maintenance human assistants need to do their best work could become an unjustifiable expense on the path to personal greatness.
Scott Alexander ponders an extreme version of this scenario in his review of Robin Hanson’s “The Age of Em”, which, while not about AI per se, predicts a highly robotic future. Scott worries that past a certain point of techno-economic acceleration, all of society becomes entirely consumed by a kind of inhuman, soulless drive for productivity.
Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.
Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself…
Have a nice day.
Thanks for reading! If you enjoyed this, I would appreciate if you would subscribe or share this article with one of your human friends.
In addition to feeling more empathy for someone with a relatable challenge, helping probably serves as a means of strengthening the therapist’s own mental health. In recognition of this principle, the Twelve-step program recommends “helping others who suffer from the same alcoholism, addictions, or compulsions.”
Regarding AI art and authenticity: one has to ask how authenticity manifests itself in different types of art. Is authenticity in painting the same as authenticity in a novel? Kandinsky's paintings would not lose much value if they were created by AI instead of a real person. "Storm of Steel" by Junger, on the other hand, would have lost a great deal of its value if it turned out that the personality that created it did not exist. Depending on the type of art, and according to the taste and temperament of the audience, different degrees of what Benjamin called "aura" are needed.
One French writer said that "to read books is to breathe in souls." I'm not a big believer in the idea that AI will succeed in creating a great novel. Which is not to say that the great novelists of the future won't use some form of AI to simulate the life course of the various characters they want to put in their work.
I would say that AI will produce a lot of art that will be appreciated and consumed with pleasure, but I don't think it will create anything that will count as part of the canon.
AI as therapist reminds me strongly of the original ELIZA (and Joseph Weizenbaum's subsequent book Computer Power and Human Reason).
His critique of AI in general still stands, IMO, even though it was made in 1976.
Plus ça change, plus c'est la même chose ("the more things change, the more they stay the same").