2025-07-12

Snippets

Table of Contents

  1. Tim Minchin - 9 Life Lessons
  2. Using AI slows down experienced Developers
  3. Toyota retains manual labor to improve automation
  4. io_uring
  5. GPU Programming Examples
  6. Statistical Machine Learning
  7. MuZero: End-to-End Value Function prediction instead of Model Learning
  8. Centroid of Map of Nepal
  9. Sort tasks by failure rate

1. Tim Minchin - 9 Life Lessons

https://www.youtube.com/watch?v=FJ__a4qVE_g

  • You don't have to have a dream.
  • Don't seek happiness.
  • Remember it's all luck.
  • Exercise.
  • Be hard on your opinions.
  • Be a teacher.
  • Define yourself by what you love.
  • Respect people with less power than you.
  • Don't rush.

2. Using AI slows down experienced Developers

Developers thought they would be sped up by ~20% but they were actually slowed down by 20%. [twitter]

Causes:

  • The tasks were in the developers' own repositories, so they were already very familiar with the code base.
  • Large and complex repositories
  • Low AI reliability: developers accepted <44% of AI generations
  • AI doesn't utilize implicit repository context

3. Toyota retains manual labor to improve automation

From twitter:

To maintain the ability to do the job yourself, you need to actually do it yourself at least from time to time.

See also: Toyota deliberately retaining manual labor in certain processes as a calculated method to deeply understand the work, identify inefficiencies, and ultimately create more effective and intelligent automation.

4. io_uring

"Lord of the io_uring" [https://unixism.net/loti/index.html] is an excellent introduction to how to use io_uring.

5. GPU Programming Examples

https://github.com/Vincent-Therrien/gpu-arena/blob/main/readme.rst

This repo has examples of how to use GPUs for compute and graphics with various technologies:

  • OpenGL
  • Vulkan
  • Metal
  • OpenCL
  • DirectX
  • WebGPU
  • CUDA
  • SYCL
  • Triton
  • OpenMP
  • AcceleratedKernels.jl

6. Statistical Machine Learning

  • Models may be probabilistic or deterministic (e.g., SVMs are statistical but not explicitly probabilistic).
  • Includes:
    • Linear regression, logistic regression
    • Kernel methods (SVM, Gaussian processes)
    • Lasso, ridge regression
    • Classical statistical inference methods

6.1. Relation with Probabilistic ML

Probabilistic Modelling is a subset of Statistical ML.

  Aspect       Statistical ML                                     Probabilistic ML
  Model form   Can be probabilistic or deterministic              Always probabilistic
  Goal         Predictive accuracy & statistical guarantees       Probabilistic reasoning & uncertainty quantification
  Parameters   Often point estimates (MLE, MAP)                   Distributions over parameters
  Inference    Estimation, hypothesis testing, model selection    Bayesian inference (exact or approximate)
  Examples     SVM, Lasso, logistic regression                    HMM, LDA, Bayesian neural networks
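
To make the "point estimates vs. distributions over parameters" row concrete, here is a minimal sketch (my own toy example, not from the snippet) contrasting an ordinary least-squares point estimate with a Gaussian posterior over the weights of a linear model:

  # Contrast a point estimate (least squares / MLE) with a posterior over
  # parameters (Bayesian linear regression with a Gaussian prior).
  import numpy as np

  rng = np.random.default_rng(0)
  n = 50
  X = np.column_stack([np.ones(n), rng.uniform(-3, 3, n)])   # bias + one feature
  true_w = np.array([1.0, 2.0])
  noise_var = 0.25
  y = X @ true_w + rng.normal(0.0, np.sqrt(noise_var), n)

  # Statistical-ML flavour: a single point estimate of the weights.
  w_mle, *_ = np.linalg.lstsq(X, y, rcond=None)

  # Probabilistic-ML flavour: a full Gaussian posterior over the weights,
  # assuming a zero-mean Gaussian prior with variance prior_var.
  prior_var = 10.0
  precision = X.T @ X / noise_var + np.eye(2) / prior_var
  cov_post = np.linalg.inv(precision)            # uncertainty over parameters
  mean_post = cov_post @ X.T @ y / noise_var     # posterior mean (also the MAP)

  print("MLE point estimate :", w_mle)
  print("Posterior mean     :", mean_post)
  print("Posterior std devs :", np.sqrt(np.diag(cov_post)))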

7. MuZero: End-to-End Value Function prediction instead of Model Learning

In Section 2: Prior Work:

Reinforcement learning may be subdivided into two principal categories: model-based, and model-free. Model-based RL constructs, as an intermediate step, a model of the environment. Classically, this model is represented by a Markov decision process (MDP) consisting of two components: a state transition model, predicting the next state, and a reward model, predicting the expected reward during that transition.

In large or partially observed environments, the algorithm must first construct the state representation that the model should predict.

This tripartite separation between representation learning, model learning, and planning is potentially problematic since the agent is not able to optimize its representation or model for the purpose of effective planning, so that, for example, modeling errors may compound during planning.

A quite different approach to model-based RL has recently been developed, focused end-to-end on predicting the value function. The main idea of these methods is to construct an abstract MDP model such that planning in the abstract MDP is equivalent to planning in the real environment. This equivalence is achieved by ensuring value equivalence, i.e. that, starting from the same real state, the cumulative reward of a trajectory through the abstract MDP matches the cumulative reward of a trajectory in the real environment.

In Introduction (Page 2):

Without any constraints on the semantics of the hidden state, the agent can invent, internally, the rules or dynamics that lead to most accurate planning.

There is no direct constraint or requirement for the hidden state to capture all information necessary to reconstruct the original observation, drastically reducing the amount of information the model has to maintain and predict; nor is there any requirement for the hidden state to match the unknown, true state of the environment; nor any other constraints on the semantics of state. Instead, the hidden states are free to represent state in whatever way is relevant to predicting current and future values and policies. Intuitively, the agent can invent, internally, the rules or dynamics that lead to most accurate planning.
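
As a rough sketch of the structure the paper describes (not code from the paper), MuZero composes three learned functions: a representation function h, a dynamics function g, and a prediction function f, and plans by unrolling them in the abstract MDP. The class and function names below are illustrative stubs only:

  # Skeleton of MuZero's three learned functions; the "networks" are stubs.
  class MuZeroModel:
      def representation(self, observation):
          """h: encode the observation into a hidden state (no requirement
          that the observation can be reconstructed from it)."""
          raise NotImplementedError

      def dynamics(self, state, action):
          """g: predict the next hidden state and the immediate reward."""
          raise NotImplementedError

      def prediction(self, state):
          """f: predict a policy (action probabilities) and a value estimate."""
          raise NotImplementedError

  def rollout_return(model, observation, actions, discount=0.99):
      """Cumulative discounted reward of an action sequence, computed entirely
      inside the abstract MDP; value equivalence means this should match the
      return of the same action sequence in the real environment."""
      state = model.representation(observation)
      total = 0.0
      for k, action in enumerate(actions):
          state, reward = model.dynamics(state, action)
          total += (discount ** k) * reward
      _policy, value = model.prediction(state)
      return total + (discount ** len(actions)) * value   # bootstrap with predicted value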

8. Centroid of Map of Nepal

Kaski district lies at the centroid of the map of Nepal.

Centroid of Map of Nepal.pdf
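
As a side note on how such a centroid can be computed: one minimal sketch uses shapely on a boundary polygon of the country. The GeoJSON file name below is hypothetical, and computing the centroid directly in lon/lat coordinates ignores projection effects:

  # Sketch: geometric centroid of a country boundary polygon.
  # "nepal_boundary.geojson" is a hypothetical local file with the outline.
  import json
  from shapely.geometry import shape

  with open("nepal_boundary.geojson") as f:
      feature = json.load(f)["features"][0]

  boundary = shape(feature["geometry"])   # Polygon or MultiPolygon
  centroid = boundary.centroid            # centroid in the same (lon, lat) coordinates
  print(f"Centroid: lon={centroid.x:.4f}, lat={centroid.y:.4f}")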

9. Sort tasks by failure rate

From: https://cs.stanford.edu/~jsteinhardt/ResearchasaStochasticDecisionProcess.html

In any project (research, hobby, design, etc.), the objective is to either complete the task or fail fast. Whenever a project has many uncertain parts, your objective should be to reduce the uncertainty. In effect, this translates to doing tasks in order of decreasing failure rate.

The project may be a research project in which several independent steps each have their own probability of success. Let's look at some scenarios:

  1. Task A (1hr, 80% success), Task B (2 hr, 80% success)

    Do task A first (i.e., the shorter task first) because both tasks have the same success probability; doing so decreases the expected fail time.

  2. Task A (2hr, 95% success), Task B (2 hr, 40% success)

    Do task B first (the most uncertain task first) because both tasks take the same time but task B is more uncertain; this again decreases the expected fail time (a small worked check follows this list).
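
One way to make "expected fail time" concrete (my own framing, assuming failure is only observed when a task finishes) is the expected total hours spent before the project either completes or stops at a failed task. A tiny check of both scenarios:

  # Expected total time spent if we stop as soon as a task fails.
  def expected_time(order):
      """order: list of (hours, success_probability), executed in sequence."""
      total, p_reach = 0.0, 1.0
      for hours, p_success in order:
          total += p_reach * hours   # this task only runs if earlier ones succeeded
          p_reach *= p_success
      return total

  A1, B1 = (1, 0.8), (2, 0.8)        # scenario 1
  A2, B2 = (2, 0.95), (2, 0.4)       # scenario 2

  print(expected_time([A1, B1]), expected_time([B1, A1]))   # 2.6 vs 2.8 -> A first
  print(expected_time([A2, B2]), expected_time([B2, A2]))   # 3.9 vs 2.8 -> B first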

One problem with the above analysis is that it assumes we only find out whether a task failed at the end of the task. This is not accurate. Instead, we can model failure as a Poisson arrival process, with the time to failure exponentially distributed with rate \(\lambda\) such that:

\begin{align*}
  p_{succ} &= e^{-\lambda T} \\
  \lambda &= \log (1/p_{succ}) / T
\end{align*}

Since we want to discover failures earlier, we do the task with the highest failure rate \(\lambda\) first.
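
A small sketch of the resulting schedule, computing \(\lambda\) from each task's success probability and duration and sorting by decreasing \(\lambda\) (the task list is made up for illustration):

  # Schedule tasks by decreasing failure rate lambda = log(1/p_succ) / T.
  import math

  tasks = {
      "write prototype":    {"hours": 2.0, "p_succ": 0.95},
      "scaling experiment": {"hours": 4.0, "p_succ": 0.40},
      "get dataset access": {"hours": 1.0, "p_succ": 0.70},
  }

  def failure_rate(hours, p_succ):
      return math.log(1.0 / p_succ) / hours

  schedule = sorted(tasks.items(),
                    key=lambda kv: failure_rate(kv[1]["hours"], kv[1]["p_succ"]),
                    reverse=True)

  for name, t in schedule:
      lam = failure_rate(t["hours"], t["p_succ"])
      print(f"{name:>20}: lambda = {lam:.3f} failures/hour")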

