Thoughts on LLMs

A braindump, in no particular order, and with no claim of completeness, even as a catalog of existing thoughts.

  1. There are numerous psychological factors that lead us to overestimate the performance and utility of LLMs:
    • Lack of existing language with which to understand and define them
    • Pareidolia of mind: our tendency to perceive a mind where none exists
    • The chat factor: a conversational interface invites us to treat the system as an interlocutor
    • Talking Dog Syndrome: we are so impressed that the dog talks at all that we fail to judge what it actually says
  2. There may be areas in which they are “useful”, but those areas are small, limited, and poorly defined (in every sense), with ragged and unclear edges
    • “Usefulness is not enough”, if that utility comes at excessive (and likely externalized) cost.
    • LLMs provide the ability to “move fast and break things” without ever producing anything of lasting use.
    • There is massive industry / VC incentive to overhype these “useful cases”
  3. Even in areas of utility, they are subject to large, stochastically unpredictable errors
    • Accuracy and reliability never seem to exceed 80% on non-trivial tasks, a rate that would disqualify any other “assistive measure”.
    • When they produce bad output, the errors can be huge and potentially deadly
    • As such, presenting that output as authoritative will frequently be dangerous, unethical, and/or a huge reputational risk
  4. The substitution of LLMs for human thought impacts learning, cognitive function and skills development, while finding the errors in their work requires more skill than doing the work in the first place.
    • This applies both in pedagogical and professional environments
    • It is very clear that once the work is produced, the appetite for checking it is limited
    • LLMs steal joy, reducing creative tasks to the drudgery of housekeeping their errors
  5. LLMs have no concepts of “truth” or “harm”, because they have no concepts.
  6. LLMs are not intelligence; they sidestep the Turing Test by “copying the answers”.
  7. LLMs will never “threaten” us, because they have no will, even if we try to give them “agency” (itself an overloaded word here).
    • Agency is both the ability to have independent thought and the ability to act independently. “Agentic AIs” are LLMs that have no volition of their own, but have been given authority to act without oversight.
    • Under this description, the script of https://xkcd.com/576/ is “agentic” (see the sketch after this list).
  8. LLMs are not AI in the sense of classical AI research, nor equivalent to the human (or any other) mind as understood by cognitive science. They are statistical tricks.
  9. LLMs do not hallucinate, they bullshit, and they do so equally when producing correct or incorrect outputs.
  10. LLM technology will probably not advance significantly over its current state. Any “true” AI will come from very different technologies.
    • “Imminent singularity” or “machine superintelligence” claims are bunk; god-like machines will not solve our global problems
  11. While LLMs will not threaten us, their use and abuse by humans do:
    • Generation and implementation of bad policy
    • Use of deepfakes to promote misinformation
    • Threats to human livelihoods by their replacement with inferior solutions
    • Erosion of human critical thinking skills, easing abuse by authoritarians
    • Concentrations of power & wealth
    • Environmental impacts
    • Use to defer action on genuine global issues, because “Tech will save us”.
  12. LLMs cannot be accountable, and are used to obscure accountability for others.
  13. The environmental impacts of LLMs are at best unclear, and deliberately so. They may already be, and likely will become, significant.
  14. LLMs, being inherently averaging, will never produce true innovation or insight; at best they can produce eccentricity, which is a product of error, not of novelty.
  15. By passing off composite information as neutral, LLMs will always serve to propagate existing biases (sometimes through the express intent of their operators).
  16. It is no coincidence that many of the primary proponents of LLMs have atrocious political and social views; it is clear that many see them as a way to accumulate wealth and power rather than to further knowledge or better society.
  17. The black-box nature of LLM mechanisms and datasets produces significant security and privacy issues.
  18. The personal costs of LLM interaction and dependency have already proven significant, leading to deaths.
    • Both in terms of harm to the user and to wider society (e.g. through radicalization)
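
A minimal sketch of item 7's point, in the spirit of the xkcd 576 script: a program granted authority to act without oversight, yet possessing no volition or understanding whatsoever. Everything here (the phrase lists, the compose and act functions) is invented for illustration and taken from no real agent framework; the side effect is a print call so the sketch stays harmless.

    #!/usr/bin/env python3
    # A deliberately dumb "agentic" script: it acts unsupervised, but nothing
    # inside it wants, knows, or intends anything. (Illustrative only.)
    import random
    import time

    # Canned fragments; the "agent" merely recombines them at random.
    OPENERS = ["Per my last message,", "Quick thought:", "Circling back,"]
    ACTIONS = ["we should sync", "let's revisit the roadmap", "please send the figures"]
    CLOSERS = ["by EOD.", "when you get a chance.", "before the standup."]

    def compose() -> str:
        """Produce a plausible-sounding message by pure random choice."""
        return " ".join(random.choice(part) for part in (OPENERS, ACTIONS, CLOSERS))

    def act(message: str) -> None:
        """Stand-in for an unsupervised side effect (sending mail, filing
        tickets, posting to chat). Printing keeps the sketch harmless."""
        print(f"[sent] {message}")

    if __name__ == "__main__":
        for _ in range(3):
            act(compose())  # authority to act, zero volition
            time.sleep(0.1)

A credulous observer might read intent into the output; there is none to find, which is precisely the point about “agentic” systems built on LLMs.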
