📚 Personal bits of knowledge

Artificial Intelligence Models#

  • LLMs build internal [[Knowledge Graphs]] in their network layers.
  • LLMs shine in the kinds of situations where "good enough is good enough".
  • Classic ML systems where humans design how the information is organized (feature engineering, linking, graph building) scale poorly (the bitter lesson). LLMs are able to learn how to organize the information from the data itself.
  • LLMs may not yet have human-level depth, but they already have vastly superhuman breadth.
  • [[Prompt Engineering|Learning to prompt]] is similar to learning to search in a search engine (you have to develop a sense of how and what to search for).
  • LLMs have encyclopedic knowledge but suffer from hallucinations, jagged intelligence, and "amnesia" (no persistent memory).
  • AI tools amplify existing expertise. The more skills and experience you have on a topic, the faster and better the results you can get from working with LLMs on that topic.
  • LLMs are useful when exploiting the asymmetry between coming up with an answer and verifying the answer (similar to how a sudoku is difficult to solve, but it's easy to verify that a solution is correct).
  • LLMs are good at the things that computers are bad at, and bad at the things that computers are good at. Also good at things that don't have wrong answers.
  • Context is king. Managing the context window effectively is crucial for getting good results.
  • Be aware of training cut-off dates when using LLMs.
  • "AIs" can be dangerous in under-specified environments (e.g. an agent pausing the game so the level lasts longer), but those are exactly the places where we will use them most. If something is well specified, there are often better solutions/optimizations (maths, code, ...).
  • When the main purpose of writing is to demonstrate your thinking (building trust, applying for a job), don't use LLM output. Use LLMs when you need to communicate information or do admin work, where the reader just wants the info and doesn't need to be convinced of how you think. LLMs are good at writing but bad at thinking.
    • LLMs are helpful when you want the output/result and don't need to do the work yourself (e.g. going to the gym doesn't work if the weights are lifted for you).
    • Personal communication and writing are acts of trust and self-expression. Rewriting with LLMs changes meaning, blurs authorship, and erodes voice.
  • Don't outsource thinking. That means doing yourself the tasks that:
    • Build complex tacit knowledge you'll need for navigating the world in the future.
    • Are an expression of care and presence for someone else.
    • Are a valuable experience in their own right.
    • Would be deceptive to fake.
  • LLMs as "stateless functions". Fixed weights, no updating. LLMs are in-context learners.
  • LLMs are a lot better at filling in the blanks (the micro) than at grand strategy (the macro). Generalists may increasingly outperform specialists.
    • The skill you want to build is the ability to understand problems and have some concept of how to solve them.
    • Knowing the name of a style or technique makes prompting for it easier.
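A minimal Python sketch of the generate/verify asymmetry mentioned above: checking a finished sudoku grid takes a few lines, while producing one from scratch is a search problem. The grid names and helper are illustrative, not from any library.

```python
# Illustration of the generation/verification asymmetry: verifying a
# completed sudoku is trivial, while producing one requires search.

def is_valid_sudoku(grid):
    """Return True if a completed 9x9 grid satisfies all sudoku rules."""
    def ok(cells):
        # A unit (row, column, or box) must contain exactly 1..9.
        return sorted(cells) == list(range(1, 10))

    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    ]
    return all(ok(unit) for unit in rows + cols + boxes)
```

The checker runs in a fixed 81-cell pass; a solver for the inverse problem needs backtracking over a combinatorial space.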
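The "stateless functions" note above can be sketched in a few lines: the model call itself retains nothing between invocations, so conversational "memory" is just the caller re-sending the whole transcript. `fake_llm` is a stand-in stub, not a real model API.

```python
# Sketch of "LLMs as stateless functions": fixed weights, no updating.
# All apparent memory lives in the prompt the caller rebuilds each turn.

def fake_llm(prompt: str) -> str:
    # Stand-in stub: just reports how many user turns it was shown.
    turns = prompt.count("User:")
    return f"(reply seen {turns} user turns)"

def chat(history, user_msg):
    """Pure function: same (history, message) in -> same reply out."""
    history = history + [f"User: {user_msg}"]
    prompt = "\n".join(history)          # the entire "memory" travels here
    reply = fake_llm(prompt)
    return history + [f"Assistant: {reply}"], reply

h, r1 = chat([], "hi")
h, r2 = chat(h, "again")
```

Because `chat` is pure, dropping an entry from `history` "forgets" it instantly, which is the in-context-learning point: the model only knows what is in the window right now.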

Use Cases#

Design Styles#

Some style "spells" (prompt keywords) to try when designing dashboards, UIs, or anything else.

  • Datasheet
  • High-contrast
  • Monospaced fonts
  • Minimal and utilitarian layout
  • Retro control-panel vibe
  • NeoTech
  • Industrial retrofuture
  • Techno brutalism
  • Neo-Brutalism
  • Editorial Minimalism
  • Swiss / International Typographic Style
  • Text-first, code-adjacent feel
  • Sharp rectangles, thin borders
  • Terminal-inspired developer minimalism
  • Fieldset + legend pattern

Resources#

Tools#

FrontEnds#

Benchmarks#