29 Comments

I’ve been mentoring a few people. Here are two situations I frequently encounter that AI mentors could solve:

1. A learner completes a task, then doesn’t know what to do next. The solution to this problem is already a feature of formal schooling, and it can be automated by AI. Too many self-directed learners end up getting lost in a confusing domain because they just don’t know where to look for the next thing. Randomly pinballing around is risky, and mentorship can save time and frustration, which ends up making all the difference for many students.

2. A learner has a specific question that’s blocking their learning: “My code isn’t compiling and the error makes no sense.” Well, you could just google it and get the answer. Easy, right? That works in the immediate term. But a mentor interprets those questions and recognizes that the learner actually has a systemic misunderstanding. Then they can create a practice regimen to cure the disease and not the symptom. Google and textbooks really struggle with this, but AI can solve it at scale!

A final thought. There’s a bell curve of learners. At the far right are gifted self-starters and self-teachers, of whom there are few. In the center are most students, who frequently need mentorship. At the far left are lost causes. AI mentorship will lift every type of student, but I suspect it will have the biggest influence on students in the center. So huge possibilities for AI mentorship!

author

Great insights as usual. I would add that there are likely many students who *appear* to be mediocre, but are actually diamonds in the rough!

If you take seriously a multiplicative model of talent, in which a person with 3 of 4 essential qualities for outstanding success can vastly underperform potential, then adding just one more essential thing via AI mentoring can make a tremendous difference.
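
To make the multiplicative intuition concrete, here is a toy sketch (the four "qualities" and all the numbers below are invented purely for illustration):

```python
# Toy multiplicative model of talent: overall performance is the *product*
# of several essential qualities, each scored from 0 to 1. The specific
# qualities and numbers are made up for illustration only.
def performance(qualities):
    result = 1.0
    for q in qualities:
        result *= q
    return result

# Strong on three essentials, weak on the fourth (say, no mentorship):
without_mentor = performance([0.9, 0.9, 0.9, 0.1])  # ~0.07

# Raise only that one weak factor and the whole product jumps:
with_mentor = performance([0.9, 0.9, 0.9, 0.8])     # ~0.58

print(without_mentor, with_mentor)  # roughly an 8x difference from one input
```

In a multiplicative world the weakest factor dominates, which is why supplying one missing essential can matter more than polishing the three that are already strong.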

Jan 29, 2023 · Liked by Age of Infovores

I also came here to say something a little similar: this kind of mentorship only really works for self-motivated learners on the right side of the learning curve.

That’s because learning difficult subjects is actually hard: it requires effort and unpleasantness, and even the best teacher can only take you somewhere you already want to go.

As a non-academic example, I’m learning guitar. Teachers help a lot, and good teachers in particular are very helpful. But in the end I still have to suffer through the progression from crappy to proficient until I eventually reach the level where I’m proficient enough to have fun with the work of learning.

I think maybe we are going to see a lot of really smart and self-motivated people who skyrocket ahead with an AI-based, superior learning system.

The people in the middle might do a little better at completing the learning they are required to do anyway, because the computer teaches better and tailors itself to each individual. But it won’t magically make them want to be the self-motivated superstar just because it’s available to them.

Jan 29, 2023 · Liked by Age of Infovores

Great points! But I differ, perhaps only semantically, on one idea: the best teachers impart a love for the subject itself. Somehow, great teachers are able to make the destination so appealing that the journey is worth it.


This is a thought-provoking essay. I agree with some points (not nearly enough mentoring is happening) and I think I disagree with others (AI might be good at it). I need to think on it some more, however; I might change my mind before I get a response essay written.

author

Would love to read your response! There are many possible futures I can envision for AI mentorship, not all of which are so sanguine.

Jan 29, 2023 · Liked by Age of Infovores

Others have done this as well, but more as exploration than as "advice giving": https://etiennefd.substack.com/p/why-do-i-feel-inadequate-at-coming

In other news, ChatGPT is good at talking confidently but less so at thinking. https://threadreaderapp.com/thread/1598430479878856737.html https://davidrozado.substack.com/p/what-is-the-iq-of-chatgpt

Once the "idea guy" has been automated, the rest is just iterative experimentation and high-speed learning. There are some human elements in Observe-Orient-Decide-Act (OODA). "Observe" = data analysis, "Decide" = ChatGPT recommendations.

BUT Orient and Act are strictly human endeavors, since robots cannot make good judgements or execute on their own behalf. https://graymirror.substack.com/p/there-is-no-ai-risk

author

This first link from AoWaM is very interesting, thanks! Reminds me in some ways of this other piece I read today from Ethan Mollick—maybe creativity will eventually become less important for humans along some dimensions as they outsource to AI? Or maybe AI will just help young researchers learn how to be creative more easily.

https://oneusefulthing.substack.com/p/a-prosthesis-for-imagination-using

Jan 31, 2023 · Liked by Age of Infovores

The latter is hopefully more true, since if creativity becomes a common good (e.g. design thinking), it gets industrialized and streamlined toward a better future. However, if it becomes a "Veblen good" (e.g. NFTs and collectibles), it becomes less important and people move on to more intellectual endeavors that are harder to forge. The Ribbonfarm prediction is that menial labor and strong "knowledge work" both follow common-good logic, BUT "creativity" violates this law as a status-seeking luxury good. https://archive.ph/MAhxA

I can't really put a finger on why people place such importance on creativity when words and pictures are not that valuable. On words as a "hack" of intelligence: https://bradnbutter.substack.com/p/porn-martyrs-cyborgs-part-1 On symbolic obsession as porn addiction: https://bradnbutter.substack.com/p/porn-martyrs-cyborgs-part-2


"Prove" used to mean "test," so the exception that proves the rule does so by testing it, like proving your case by testing it in court, or proving your strength by testing it in a trial. You still see this usage with proving bread, which is a test that it will rise in the oven.

author

Thank you for mentoring me :)


Indeed. I am curious as to what ChatGPT's answer was; the use of "exception that proves the rule" seems to have changed a great deal, from "testing that demonstrates it is solid" to "the weird case that falls outside the rule, but that's OK for some reason."

author

That’s how Bryan Caplan seems to use it, but I like Henry’s answer better.

GPT’s answer was more consistent with the latter—“the phrase is often used to indicate that something that appears to be an exception actually serves to reinforce the rule that it appears to break.”


They are two sides of the same coin, I think. By surviving the test of the exception, the rule is proved. I think of this as Johnson’s Orchard Maxim:

“Shakespeare never has six lines together without a fault. Perhaps you may find seven, but this does not refute my general assertion. If I come to an orchard, and say there’s no fruit here, and then comes a poring man, who finds two apples and three pears, and tells me, ‘Sir, you are mistaken, I have found both apples and pears,’ I should laugh at him: what would that be to the purpose?”


See also: https://www.bartleby.com/81/6014.html

author

I like that—“the very fact of exceptions proves there must be a rule.”


Shows the test/prove relationship very nicely.


That is some pretty bad logic though, at least if applied in general. I mean, if someone says "Well, ____ is an exception" that does imply that they have some rule in mind. But it doesn't mean a rule is true or tested...

Of course, that makes it a good candidate for why people started using the term erroneously :)


Shows it exists. A rule is looser than a law by definition. Newton had laws, not rules, for example.


That makes sense as the transitional point, when people got a little confused between the original meaning and the new one, much like how people butcher "begs the question." That the new meaning is actually directly counter to the original fits that pattern too; amusingly, so does Johnson: his statement is strictly incorrect, and also faulty because he doesn't specify the purpose when making it! If your purpose was to go to the orchard and harvest a lot of fruit to take to market, sure, "There's no fruit here" is sufficiently close to true. If your purpose is "I gotta get something to eat, I am starving," the statement is terribly far from true. Falling into that "it's close enough, you know what I mean!" trap happens to everyone, it seems :)

The modern usage, at least where I have lived for the past 40-odd years, seems to expect that the exception is not just seemingly outside the rule but is, in fact, outside the rule, and that having something which doesn't fit the rule proves the rule is good. Which... I just can't get behind at all.


No, it’s showing that you don’t have to take statements of rules as strictly and literally as you would laws. A rule of thumb isn’t rendered incorrect by stray examples.


What is the basis for this distinction between statements of rules and statements of law? You have used it multiple times now, but why rules can be vague and indeterminate while laws cannot has never been stated. Are you perhaps using a definition of "rule" that is somewhat different from the common one?


Yeah, it was that second case (or the one ChatGPT returned) that seems to have dominated my entire life as well. It wasn't till I learned more about historical armor-making that I realized the meaning had changed quite a bit. That was also when terms like "bullet proof" started to make any kind of sense :)

Jan 29, 2023 · Liked by Age of Infovores

I’m going to quibble over definitions here. To me, mentorship involves someone helping you make decisions that advance you along an unknown path. Usually a mentor is someone who has done it before and can both answer questions and, more importantly, give you questions. AI can’t navigate the unknown.

What I’m reading in this post is more about how AI can answer questions and find and relay good advice; it can also take in every part of your digital life and map it to another example of a successful life you might want to emulate. But it can only navigate what is known.

author

I’m envisioning a kind of Socratic dialogue in response to this…

Socrates: Greetings, friend. Can you tell me more about why you believe an AI mentor cannot navigate the unknown?

Andrew: Well, to me mentorship involves someone helping you make decisions that help you advance along an unknown path. Usually a mentor is someone who has done it before and can both answer questions and, more importantly, give you questions.

Socrates: I see. So you believe that an AI mentor cannot help someone navigate the unknown because it lacks the experience and ability to ask questions. But what if the AI mentor is able to access and analyze vast amounts of data and information that can be used to provide guidance and help someone navigate the unknown?

Andrew: That's true, I suppose an AI mentor could have access to a lot of data and information that could be helpful. But I still don't think it can truly understand and navigate the unknown.

Socrates: I understand your concerns. But consider this: the unknown is not something that can be fully understood by any one individual or even group of individuals. It is constantly changing and evolving. But an AI mentor, with its ability to access and analyze vast knowledge, can help someone navigate the unknown by providing them with perspectives that they may not have considered before. Additionally, through machine learning, it can continuously update and improve its understanding and ability to navigate the unknown.

Andrew: I see what you're saying. It's true that an AI mentor could potentially provide a lot of valuable information and perspectives to help someone navigate the unknown. But I think that only a human mentor can truly understand its complexities.

Socrates: And I would agree that the human touch and understanding is important, but I believe that an AI mentor can complement and enhance the work of human mentors. Together, they can provide a more comprehensive approach to navigating the unknown.

Andrew: I see the point. An AI mentor can be a useful tool to help someone navigate the unknown, but it cannot replace the human touch.

Socrates: Exactly, my friend. The AI mentor can be a valuable asset in the journey of navigating the unknown, but it should be used in conjunction with the guidance of human mentors to obtain the highest benefit.


It’s a thought-provoking conversation (and maybe that’s all that matters), but it’s inaccurate. If I teach a computer to recognize cats and boats, it can do it even better than a human. But if I give it a dog, it will tell me that it’s either a cat or a boat. A computer cannot, by definition, do something outside of its instruction set.
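
As a rough illustration of that closed-set point (the labels and scores below are hypothetical, not from any particular model), a classifier trained on exactly two labels has to spread all of its probability across them, even for an input that belongs to neither:

```python
import math

# Hypothetical two-class classifier: it can only ever answer "cat" or "boat",
# so even a dog photo gets mapped onto one of those two labels.
LABELS = ["cat", "boat"]

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

dog_scores = [1.2, 0.3]  # made-up raw scores for an out-of-distribution input
probs = softmax(dog_scores)

print(dict(zip(LABELS, probs)))
# {'cat': 0.71..., 'boat': 0.28...} -- the model still "answers", just never "dog".
```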

Data, by definition, is information about the past. For the most part, all we need is to extrapolate the past into the future, and that works out pretty well. But we run into problems with AI whenever the future requires a significant deviation from the past. A good mentor would be able to handle that (I’ll concede that it’s hard to find even a good human mentor).

But that doesn’t mean there isn’t value in having an AI mentor. There are more times when I need an answer than advice.


Well, there are two benefits to mentorship: advice and encouragement.

I agree that future LLMs will be helpful for giving advice. I have already incorporated ChatGPT into my workflow whenever I am doing any kind of exploratory research or brainstorming, and it's been useful.

I can imagine that in the future, instead of going to your university's career counselor, you go to your friendly neighbourhood large language model. Just submit your transcript and a comprehensive questionnaire, and the AI will tell you what jobs and internships to apply to and what skills you will need to both get and ace the interview.

Good advice is useful and hard to find. But the other benefit of mentorship, and perhaps the most important element, is encouragement. Tyler Cowen has talked about this in the context of Emergent Ventures. He said that the most important thing Emergent Ventures winners get isn't money or even prestige, but "permission". Permission to do what? Permission to be ambitious, to break the mold, to take risks. And this works because there is a necessary costly-signalling component: if you are an Emergent Ventures winner, Tyler has chosen *you* out of all the other bright-eyed and ambitious applicants.

I am skeptical of how scalable "the permission to be ambitious" is. It's clearly *somewhat* scalable, since we see geographic and cultural variations in ambition. Silicon Valley (and America more broadly) encourages (demands, really) naked ambition. Other places are sleepier and steadier. But still: ambition is hard to cultivate deliberately.

author

Great insights! I see a lot of potential for GPT to replace career counseling, since a lot of times the person behind the desk doesn't know anything about you and has limited insight beyond Google anyway. I also have found that most people are hesitant to clearly specify the most basic relevant facts related to big life decisions because they are afraid to offend. "If I tell this person that Journalism is a low-paying major with more graduates each year than there are total journalist jobs in existence and that job openings are even rarer than that, might they cry or even report me to someone?" Even on the student side of this situation, it is awkward to ask if it pays well when you're talking to someone who might not get paid very much themselves.

On ambition, this is well taken. It's hard to say how much AI might help cultivate that attribute. But I do hope AI at least helps, to some degree, with not getting discouraged when trying something difficult that you have some initial interest in but don't yet know how to do very well. Even a little bit of a foothold in a topic can really go a long way toward building skills and self-sufficiency.
