Snippets
1. Tim Minchin - 9 Life Lessons
https://www.youtube.com/watch?v=FJ__a4qVE_g
- You don't have to have a dream.
- Don't seek happiness.
- Remember it's all luck.
- Exercise.
- Be hard on your opinions.
- Be a teacher.
- Define yourself by what you love.
- Respect people with less power than you.
- Don't rush.
2. Using AI slows down experienced Developers
Developers thought they would be sped up by ~20%, but they were actually slowed down by 20%. [twitter]
Causes:
- The tasks were in the developers' own repositories, so they were very familiar with the code base.
- Large and complex repositories
- Low AI reliability: developers accept <44% of AI generations
- AI doesn't utilize implicit repository context
3. Toyota retains manual labor to improve automation
From twitter:
To maintain the ability to do the job yourself, you need to actually do it yourself at least from time to time.
See also: Toyota deliberately retaining manual labor in certain processes as a calculated method to deeply understand the work, identify inefficiencies, and ultimately create more effective and intelligent automation.
4. io_uring
"Lord of the io_uring" [https://unixism.net/loti/index.html] is an excellent introduction to how to use io_uring.
5. GPU Programming Examples
https://github.com/Vincent-Therrien/gpu-arena/blob/main/readme.rst
This repo has examples of how to use GPUs for compute and graphics using various technologies:
- OpenGL
- Vulkan
- Metal
- OpenCL
- DirectX
- WebGPU
- CUDA
- SYCL
- Triton
- OpenMP
- AcceleratedKernels.jl
6. Statistical Machine Learning
- Models may be probabilistic or deterministic (e.g., SVMs are statistical but not explicitly probabilistic); see the sketch after this list.
- Includes:
- Linear regression, logistic regression
- Kernel methods (SVM, Gaussian processes)
- Lasso, ridge regression
- Classical statistical inference methods
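To make the first bullet concrete, here is a minimal sketch (assuming scikit-learn; the toy data are made up): an SVM is fit with a statistical criterion but exposes only a signed margin, while logistic regression on the same data outputs an explicit probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Toy, linearly separable data (made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# SVM: statistical ML, but its output is a signed margin, not a probability.
svm = SVC(kernel="linear").fit(X, y)
print("SVM margins:      ", svm.decision_function(X[:3]))

# Logistic regression: explicitly probabilistic, returning P(y=1 | x).
logreg = LogisticRegression().fit(X, y)
print("LogReg P(y=1 | x):", logreg.predict_proba(X[:3])[:, 1])
```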
6.1. Relation with Probabilistic ML
Probabilistic Modelling is a subset of Statistical ML.
| Aspect | Statistical ML | Probabilistic ML |
|---|---|---|
| Model form | Can be probabilistic or deterministic | Always probabilistic |
| Goal | Predictive accuracy & statistical guarantees | Probabilistic reasoning & uncertainty quantification |
| Parameters | Often point estimates (MLE, MAP) | Distributions over parameters |
| Inference | Estimation, hypothesis testing, model selection | Bayesian inference (exact or approximate) |
| Examples | SVM, Lasso, logistic regression | HMM, LDA, Bayesian neural networks |
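The "Parameters" row is the sharpest contrast. A small sketch (plain NumPy, made-up data; the Gaussian prior and noise model are assumptions of the example, not of the note) compares a single point estimate with a full posterior over the same linear-regression weights:

```python
import numpy as np

# Made-up regression data.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

alpha, beta = 1.0, 100.0  # assumed prior precision and noise precision

# Statistical-ML view: one point estimate (here the ridge/MAP solution).
lam = alpha / beta
w_point = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Probabilistic-ML view: a Gaussian posterior N(w | w_mean, w_cov) over the
# parameters, from which uncertainty can be read off or sampled.
w_cov = np.linalg.inv(alpha * np.eye(3) + beta * X.T @ X)
w_mean = beta * w_cov @ X.T @ y   # coincides with the MAP point estimate here

print("point estimate:", w_point)
print("posterior mean:", w_mean)
print("posterior std: ", np.sqrt(np.diag(w_cov)))
```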
7. MuZero: End-to-End Value Function Prediction instead of Model Learning
Reinforcement learning may be subdivided into two principal categories: model-based and model-free. Model-based RL constructs, as an intermediate step, a model of the environment. Classically, this model is represented by a Markov decision process (MDP) consisting of two components: a state transition model, predicting the next state, and a reward model, predicting the expected reward during that transition.
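As a minimal illustration (a hypothetical tabular environment, not an example from the paper), such a classical model can be estimated directly from experience: transition probabilities from counts, rewards from running averages.

```python
from collections import defaultdict

trans_counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
reward_sum = defaultdict(float)                         # (s, a) -> summed reward
visit_count = defaultdict(int)                          # (s, a) -> visits

def update(s, a, r, s_next):
    """Record one real transition (s, a, r, s')."""
    trans_counts[(s, a)][s_next] += 1
    reward_sum[(s, a)] += r
    visit_count[(s, a)] += 1

def transition_model(s, a):
    """Estimated P(s' | s, a) as a dict of probabilities."""
    counts = trans_counts[(s, a)]
    total = sum(counts.values())
    return {s_next: c / total for s_next, c in counts.items()}

def reward_model(s, a):
    """Estimated expected reward for taking a in s."""
    return reward_sum[(s, a)] / visit_count[(s, a)]

# Hypothetical experience in states 0/1 with action "right".
update(0, "right", 1.0, 1)
update(0, "right", 0.0, 1)
print(transition_model(0, "right"), reward_model(0, "right"))
```

A planner (e.g. value iteration) would then be run on this estimated MDP, which is the separate planning stage discussed below.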
In large or partially observed environments, the algorithm must first construct the state representation that the model should predict.
This tripartite separation between representation learning, model learning, and planning is potentially problematic since the agent is not able to optimize its representation or model for the purpose of effective planning, so that, for example, modeling errors may compound during planning.
A quite different approach to model-based RL has recently been developed, focused end-to-end on predicting the value function. The main idea of these methods is to construct an abstract MDP model such that planning in the abstract MDP is equivalent to planning in the real environment. This equivalence is achieved by ensuring value equivalence, i.e. that, starting from the same real state, the cumulative reward of a trajectory through the abstract MDP matches the cumulative reward of a trajectory in the real environment.
There is no direct constraint or requirement for the hidden state to capture all information necessary to reconstruct the original observation, drastically reducing the amount of information the model has to maintain and predict; nor is there any requirement for the hidden state to match the unknown, true state of the environment; nor any other constraints on the semantics of state. Instead, the hidden states are free to represent state in whatever way is relevant to predicting current and future values and policies. Intuitively, the agent can invent, internally, the rules or dynamics that lead to most accurate planning.
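A rough sketch of the value-equivalent idea (my own simplification in Python/PyTorch; every dimension, name, and the training targets below are made up, not MuZero's actual architecture): three learned functions are unrolled from a real observation, and the loss only compares predicted rewards and values against the real trajectory, never reconstructed observations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, HID_DIM, N_ACTIONS = 8, 32, 4

# h: observation -> hidden state (representation)
h = nn.Sequential(nn.Linear(OBS_DIM, HID_DIM), nn.ReLU())
# g: (hidden state, action) -> [reward, next hidden state] (dynamics)
g = nn.Linear(HID_DIM + N_ACTIONS, 1 + HID_DIM)
# f: hidden state -> [policy logits, value] (prediction)
f = nn.Linear(HID_DIM, N_ACTIONS + 1)

def unroll(obs, actions):
    """Roll the abstract model forward from one real observation."""
    s = h(obs)
    pred_rewards, pred_values = [], []
    for a in actions:
        out = g(torch.cat([s, F.one_hot(a, N_ACTIONS).float()], dim=-1))
        pred_rewards.append(out[..., 0])    # predicted reward of this step
        s = out[..., 1:]                    # next abstract (hidden) state
        pred_values.append(f(s)[..., -1])   # predicted value of that state
    return torch.stack(pred_rewards), torch.stack(pred_values)

# Hypothetical real trajectory: one observation, K = 3 actions, and the
# rewards / returns actually observed in the environment.
obs = torch.randn(OBS_DIM)
actions = torch.tensor([0, 2, 1])
real_rewards = torch.tensor([0.0, 1.0, 0.0])
real_returns = torch.tensor([1.0, 1.0, 0.0])  # e.g. reward-to-go

# Value equivalence: the abstract rollout must match the real rewards/returns;
# there is no reconstruction loss on observations or hidden states.
pred_r, pred_v = unroll(obs, actions)
loss = ((pred_r - real_rewards) ** 2).mean() + ((pred_v - real_returns) ** 2).mean()
loss.backward()  # gradients flow end-to-end through f, g and h
```

The real MuZero also predicts a policy at each unrolled step and plans with MCTS over the abstract model; the sketch only shows the value-equivalence training signal.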