15 Comments
Dec 17, 2022 · Liked by Age of Infovores

AI is the ultimate infovore

Dec 17, 2022 · Liked by Age of Infovores

Seems like Tyler disagrees that infovores will be among the winners:

"The returns to factual knowledge are falling,"

Also, I was confused by this bit from Tyler:

"One striking feature of the new AI systems is that you have to sit down to use them. Think of ChatGPT, Stable Diffusion, and related services as individualized tutors, among their other functions. They can teach you mathematics, history, how to write better and much more. But none of this knowledge is imparted automatically. There is a relative gain for people who are good at sitting down in the chair and staying focused on something. Initiative will become more important as a quality behind success."

I'm not aware of many traditional learning vehicles that don't involve sitting down and paying attention--e.g., school, YouTube. I'm not aware of ANY learning vehicles that automatically impart knowledge. Maybe I'm missing context here.


Strongly disagree that AI will make conscientiousness less important in the short run. Subtle errors that sound convincing can have large negative consequences, and they are an important danger of current large language models. Also, generating a schedule is something like 5% of conscientiousness; obviously the most important part of conscientiousness is actually carrying the task out!


To mess with the logic of things: ChatGPT has been tested with IQ tests, and it is now known to be very literate and rhetorically strong (like a professor or bureaucrat) but not logically sound (as dumb as a 13-year-old). It is meticulous at spotting errors of thought yet cannot create coherent thought on its own. Therefore we can assume that it is non-argumentatively agreeable and extraverted (and maybe emotionally stable too, since it has too much confidence), but not conscientious or open/intelligent (or even honest/humble?). An ESFP, if you will.

If this is the case, it weeds out "BS writers," as more nuanced creative thought can be prioritized over writing that "sells" an idea that is patently obvious and hollow. Con men would be unemployed, since ChatGPT, like other BERT-esque models, can also be used to spot spam articles or to sort biased articles by stance and by the information they reference (a rough sketch of what I mean is below). Could it be that the fuss over AI comes mainly from those tasked with shaping public opinion, rather than from the truth finders?

References to references: https://bradnbutter.substack.com/p/porn-martyrs-cyborgs-part-1
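
For anyone curious, here is a minimal sketch of the kind of sorting I mean: a BERT-style model used off the shelf to flag spam/clickbait versus substantive writing. It assumes the Hugging Face transformers library and its zero-shot-classification pipeline; the model choice, example texts, and candidate labels are my own illustrative assumptions, not anything from the post.

```python
# Hedged sketch: using a BERT-esque NLI model to sort articles by stance/spamminess.
# Assumes the Hugging Face `transformers` package; the model and labels below
# are illustrative choices, not a prescribed setup.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

articles = [
    "Exclusive! This one weird trick will make you rich overnight.",
    "The central bank raised rates by 25 basis points, citing inflation data.",
]

# Candidate labels stand in for "spam vs. substantive" plus a stance bucket.
labels = ["spam or clickbait", "substantive reporting", "opinion or advocacy"]

for text in articles:
    result = classifier(text, candidate_labels=labels)
    # result["labels"] is sorted by descending score; take the top label.
    print(f"{result['labels'][0]:>22} ({result['scores'][0]:.2f})  {text[:60]}")
```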


"Chat GPT is a moderate liberal Democrat."

This is pretty easily disprovable, and it actually says more about what beliefs fall into the other categories. "Conservative" politics depend more on a variety of religious, superstitious, and contradictory beliefs that aren't frequently represented in the texts the model was built on.

Basically, a language model is going to be biased towards the most coherent texts.

But at least your thoughts on Joe Rogan are pretty solid.
