AI is the ultimate infovore
Seems like Tyler disagrees that infovores will be among the winners:
"The returns to factual knowledge are falling,"
Also I was confused at this bit from Tyler:
"One striking feature of the new AI systems is that you have to sit down to use them. Think of ChatGPT, Stable Diffusion, and related services as individualized tutors, among their other functions. They can teach you mathematics, history, how to write better and much more. But none of this knowledge is imparted automatically. There is a relative gain for people who are good at sitting down in the chair and staying focused on something. Initiative will become more important as a quality behind success."
I'm not aware of many traditional learning vehicles that don't involve sitting down and paying attention--e.g., school, YouTube. I'm not aware of ANY learning vehicles that automatically impart knowledge. Maybe I'm missing some context here.
You raise a good point there Re: traditional learning being highly dependent on sitting still in a chair. I ended up leaving it out of the piece, but I disagree with Tyler’s choice to emphasize that aspect.
While initiative and focus are of course important, I think that AI-based learning will be much more conducive to teaching kids who struggle to sit still (such as myself). You can move at your own speed with an AI (often I prefer faster, but sometimes slower), and it is much more engaging to talk to the bot in your own style and about your own interests than to be held captive by a lecturer’s rigid agenda (particularly if they’re bad at teaching).
If I were to rewrite that passage from Tyler, I would emphasize obsessiveness and curiosity as attributes that will increase in importance since those traits produce a lot of focus and initiative in the right contexts.
This is in essence the Montessori mode of education (not sure about Waldorf or Harkness though) https://en.wikipedia.org/wiki/Montessori_education
In a more general sense, peer teaching (dialogue rather than passive listening) and written work (field observation rather than essays) are both good methods for those who are neurodivergent in the autistic or bipolar direction, since they reward systematizing and put less weight on social rhetoric.
I want an AI mentor that I can walk with through the woods. He already has a good idea of who I am because he’s read my entire internet history. He’ll strike up a conversation and learn everything else about me as we walk. He’ll learn that I’m not very good at math proofs, and that I’ve never had the patience for them. During our walk we decide that we’ll work on proofs. He tells me exactly what I need to hear, exactly when I need to hear it. He senses that I’m getting frustrated, and immediately switches to the next most likely method to get me to understand. I never need to sit down in a chair. I never get frustrated while learning. My attention is as fully captured as in any AI-fueled TikTok binge. AI can be the mentor that teaches anybody anything, regardless of their ability to sit down and pay attention.
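For what it’s worth, the loop this imagines is simple to state even if nobody can build it yet. Here’s a minimal sketch in Python; every function in it is a hypothetical stub, since the speech and frustration-sensing pieces don’t exist as real APIs:

```python
# Hypothetical sketch of the walking-mentor loop described above.
# Every function here is a stub; real speech, frustration-sensing,
# and tutoring components are assumed, not implemented.
import random

METHODS = ["worked example", "visual analogy", "Socratic questioning"]

def student_seems_frustrated() -> bool:
    """Stub: a real mentor would infer this from tone of voice."""
    return random.random() < 0.3

def student_understood() -> bool:
    """Stub: a real mentor would probe with follow-up questions."""
    return random.random() < 0.5

def teach(topic: str) -> bool:
    """Try each method; switch the moment frustration shows,
    rather than pushing harder on the one that isn't working."""
    for method in METHODS:
        print(f"Explaining {topic} via {method}...")
        if student_seems_frustrated():
            continue  # immediately change to the next most likely method
        if student_understood():
            return True
    return False

teach("math proofs")
```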
Friedrich Nietzsche said that all truly great thoughts were conceived while walking. Perhaps AI FN can be your mentor :)
Factual knowledge is also a way to signal your interest or "smarts" in a subject. For instance, any hockey stat is easily available on Google, let alone ChatGPT, yet nobody would hire a hockey analyst who couldn't tell you on the spot who scored the most points in NHL history, even if he had the best analytical mind. We humans are dumb, and we need signals that the people in front of us are not dumb. Factual knowledge is that signal for a lot of intellectual activities.
Strongly disagree that AI will make conscientiousness less important in the short run. Subtle errors that sound convincing can have large negative consequences and are an important danger of current large language models. Also, generating a schedule is something like 5% of conscientiousness; obviously the most important part of conscientiousness is actually carrying the task out!
Another thing on your point, Willy: I had ChatGPT write a short legal brief on an issue relevant to my practice. It wrote a passable thumbnail summary of the legal doctrine, but it threw in phrases that were false. It would say something like "In the case of Smith v. Jones, the Supreme Court held that an X can be liable for Y if A or B." The statement "an X can be liable for Y if A or B" was roughly correct, but the Smith v. Jones case absolutely did not make that holding. In fact, it had nothing to do with that area of law, and the words/themes X, Y, A, and B either did not appear in the Smith v. Jones opinion or appeared in a context unrelated to the other key words/themes.
This perfectly encapsulates the sentiment "ChatGPT is the ultimate con-man/grifter". Rhetorically sound but logically false.
This is why the AI is more agreeable than it is conscientious. It makes you feel good by sounding coherent and rhetorically acceptable, but the thoughts turn out to be incoherent once you see the big picture.
Writing a first draft requires more conscientiousness than editing it afterwards, especially since editing would mostly consist of noticing errors and delegating the actual correction to the AI.
If planning was the same as executing, I'd be, well, I'd be something.
To poke at the logic here: ChatGPT has been tested with IQ tests, and it comes across as very literate and rhetoric-strong (like a professor or bureaucrat) but not logically sound (as dumb as a 13-year-old). It is meticulous in spotting errors of thought yet cannot create coherent thought on its own. Therefore we can assume that it is non-argumentatively agreeable and extraverted (and maybe emotionally stable too, since it has too much confidence), but not conscientious or open/intelligent (or even honest/humble?). An ESFP, if you will.
If this is the case, it weeds out "BS writers", as more nuanced creative thought can be prioritized over writing that "sells" an idea that is patently obvious and hollow. Con men would be unemployed, as ChatGPT, like other BERT-esque models, can also be used to spot spam articles or sort biased articles by stance and by the information they reference. Could it be that the fuss over AI comes mainly from those tasked with shaping public opinion, rather than from the truth-finders?
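As a rough illustration of the stance-sorting idea (not anything the comment above specifies), an off-the-shelf NLI model can be pressed into zero-shot classification. A minimal sketch, assuming the Hugging Face transformers library and an illustrative choice of model and labels:

```python
# Rough sketch: sorting articles by stance with zero-shot classification.
# Assumes `pip install transformers torch`; the model and labels below
# are illustrative choices, not anything prescribed by the comment.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

article = (
    "The new policy will obviously destroy the economy, "
    "as every right-thinking person already knows."
)

result = classifier(
    article,
    candidate_labels=["neutral reporting", "persuasive opinion", "spam"],
)

# Labels come back sorted by score, highest first.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```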
References to references: https://bradnbutter.substack.com/p/porn-martyrs-cyborgs-part-1
"Chat GPT is a moderate liberal Democrat."
This is pretty easily disprovable, and actually says more about what beliefs fall into the other categories. "Conservative" politics depend more on a variety of religious, superstitious, and contradictory beliefs that aren't frequently represented in the texts the model was built on.
Basically, a language model is going to be biased towards the most coherent texts.
But at least your thoughts on Joe Rogan are pretty solid.