Jekyll2022-10-28T18:56:41-07:00/feed.xmlMonte Fischer’s NotebookHis online platform, an internet notebook.Monte FischerFinite Poisson approximation2021-12-16T00:00:00-08:002021-12-16T00:00:00-08:00/poisson-approximation<p>Limit theorems are a beautiful part of probability theory, but in real life data comes in finite samples. The Poisson limit theorem says that binomial distributions \(\mu_n\) with parameters \(n\) and \(p=\lambda/n\) converge in distribution to a Poisson distribution \(\mathcal{P}_\lambda\). Informally, the Poisson distribution models the total number of occurrences of i.i.d. Bernoulli rare events (<a href="https://en.wikipedia.org/wiki/Poisson_distribution#Law_of_rare_events">Wikipedia discussion</a>).</p>
<p>Some natural questions follow from the classical limit theorem.</p>
<ul>
<li>What happens if some events are dependent?</li>
<li>What if the events don’t have identical distributions?</li>
<li>How fast is the convergence?</li>
</ul>
<p>I learned from <a href="/diaconis-recs">Persi Diaconis</a> that finitely many events which are “not too dependent” and occur “not too frequently” will behave like the Poisson.</p>
<p>Let’s introduce some notation to understand this. Suppose we have \(n\) events \(X_1, \dots, X_n\), not necessarily independent. Each event \(X_i\) occurs with probability \(p_i\):</p>
\[X_i = \left\{\begin{array}{ll} 1 & \quad \text{w.p.}\ \ p_i \\ 0 & \quad \text{w.p.}\ \ 1-p_i \end{array}\right.\]
<p>We’re relaxing the independence requirement, so we can’t just multiply to get the probability that two events both occur. We’ll use \(p_{ij}\) to denote the probability that event \(X_i X_j\) occurs, i.e. both \(X_i\) and \(X_j\) happen.</p>
<p>Now let \(W=\sum_{i=1}^n X_i\) and let \(\lambda = \sum_{i=1}^n p_i\). The question is then <em>how close is \(W\) to \(\mathcal{P}_\lambda\)</em>?</p>
<p>Naturally, the answer will depend on how the events depend on each other. If every \(X_i\) is a copy of the same event, then their sum will not behave at all like a Poisson distribution. If they are all independent and rare, then as \(n \to \infty\) we expect the distance in distribution to \(\mathcal{P}_\lambda\) to be very small, by the classical limit theorem.</p>
<p>To deal with dependency, we can introduce a <em>dependency graph</em>. Each node represents one of the events, and we draw edges so that any collection of events \(A\) is independent of another collection \(B\) whenever \(A\) and \(B\) have no edges between them in the graph. For example, in the trivial case where all the events are independent, the graph has no edges. If only the first two events depend on each other, then there is an edge between nodes 1 and 2, but no other edges.</p>
<p>Finally, we need to introduce a metric on probability distributions to describe what it means for distributions to be “close” in a rigorous way. We’ll use the <a href="https://en.wikipedia.org/wiki/Total_variation_distance_of_probability_measures"><em>total variation distance</em></a>, defined between two probability distributions \(\mu\) and \(\nu\) as</p>
\[||\mu - \nu||_{TV} = \sup_A |\mu(A) - \nu(A)|\]
<p>i.e. the largest discrepancy between the probabilities that \(\mu\) and \(\nu\) assign to the same event.</p>
<p>At last we can state the result, gloriously free of limits:</p>
\[|| W - \mathcal{P}_\lambda||_{TV} \leq \min\{3, \lambda^{-1}\} \left( \sum_{i=1}^n \sum_{j \in N(i)} p_{ij} + \sum_{i=1}^{n} \sum_{j \in N(i) \cup \{i\}} p_i p_j \right)\]
<p>where \(N(i)\) denotes the <a href="https://en.wikipedia.org/wiki/Neighbourhood_(graph_theory)">neighborhood</a> of \(i\) in the dependency graph.</p>
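<p>As a quick sanity check (this example is mine, not from the sources above), consider the fully independent case: every neighborhood \(N(i)\) is empty, the first double sum vanishes, and the bound reduces to \(\min\{3, \lambda^{-1}\} \sum_i p_i^2\). With \(n = 100\) events of probability \(p_i = 0.05\) each, so that \(W \sim \text{Binomial}(100, 0.05)\) and \(\lambda = 5\), a direct computation confirms the bound:</p>

```python
from math import comb, exp, factorial

n, p = 100, 0.05        # 100 independent rare events
lam = n * p             # lambda = sum of the p_i = 5.0

# Independent events: the dependency graph has no edges, so the bound
# collapses to min(3, 1/lambda) * sum_i p_i^2.
bound = min(3, 1 / lam) * n * p**2

def binom_pmf(k):
    # P(W = k) when the events are independent: Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def pois_pmf(k):
    # Poisson(lambda) probability mass function
    return exp(-lam) * lam**k / factorial(k)

# Total variation distance = (1/2) sum_k |P(W = k) - P_lambda(k)|;
# the Poisson(5) mass beyond k = 150 is negligible.
tv = 0.5 * sum(abs(binom_pmf(k) - pois_pmf(k)) for k in range(150))

print(f"TV distance = {tv:.4f}, bound = {bound:.4f}")
```

Here the bound evaluates to 0.05, while the actual distance is noticeably smaller; bounds of this type are honest but not always tight.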
<p>The proof of this approximation theorem uses <a href="https://en.wikipedia.org/wiki/Stein%27s_method">Stein’s method</a>, a popular technique in modern probability theory. For further reading, see the paper <a href="https://projecteuclid.org/journals/statistical-science/volume-5/issue-4/Poisson-Approximation-and-the-Chen-Stein-Method/10.1214/ss/1177012015.full">“Poisson Approximation and the Chen-Stein Method” by Arratia, Goldstein, and Gordon</a> and the references I list <a href="/diaconis-recs">here</a>.</p>Monte FischerLimit theorems are a beautiful part of probability theory, but in real life data comes in finite samples. The Poisson limit theorem says that binomial distributions \(\mu_n\) with parameters \(n\) and \(p=\lambda/n\) converge in distribution to a Poisson distribution \(\mathcal{P}_\lambda\). Informally, the Poisson distribution models the total number of occurrences of i.i.d. Bernoulli rare events (Wikipedia discussion).A semester of probability with Persi Diaconis2021-12-10T00:00:00-08:002021-12-10T00:00:00-08:00/diaconis-recs<p>In Fall 2021, I took a course on measure-theoretic probability with the great Persi Diaconis. Persi continually suggested books and articles throughout the class, which I have gathered here along with some of his comments.</p>
<p>For more in this vein, see my similar article on <a href="/taleb-probability-theory">Nassim Taleb’s probability library</a>.</p>
<p>General probability theory:</p>
<ul>
<li>Patrick Billingsley, <em>Probability and Measure</em>. This was the main textbook we used in class.</li>
<li>Leo Breiman, <em>Probability</em>. A very good book for regular conditional probability.</li>
<li>Hogg, McKean, and Craig, <em>Introduction to Mathematical Statistics</em>. Diaconis’s favorite elementary probability book.</li>
<li>Billingsley, <em>Convergence of Probability Measures</em>. Very readable reference for weak convergence on metric spaces.</li>
<li>Kallenberg, <em>Foundations of Modern Probability (3rd ed)</em>. “A book on the shelf of every modern probabilist.” Technical, not many stories.</li>
<li>Dudley, <em>Real Analysis and Probability</em>. Good combination of history / stories and rigour.</li>
<li>Feller, Volumes I and II. Full of stories and a classic.</li>
</ul>
<p>Remarks on the Lebesgue integral:</p>
<ul>
<li>For “nice” functions, the Lebesgue integral agrees with Riemann.</li>
<li>Sometimes (e.g. Dirichlet’s function) the Lebesgue integral exists when the Riemann integral doesn’t.</li>
<li>We still need Riemann for some improper integrals! \(\int_0^\infty \frac{\sin(x)}{x} dx\) has no Lebesgue integral, but the improper Riemann integral exists. Similarly, \(\sum_{j=1}^\infty \frac{(-1)^j}{j}\) converges conditionally but not absolutely, so it breaks under the Lebesgue (counting-measure) theory.</li>
<li>Riemann is also what we use for computations!</li>
<li>The Henstock integral combines the best of both worlds (see the American Math Monthly article on this, presumably the one by <a href="http://classicalrealanalysis.info/documents/Bartle1996-2974874.pdf">Bartle</a>).</li>
<li>Thus “no serious analytical probabilist would throw out the Riemann integral.”</li>
</ul>
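<p>To spell out the \(\frac{\sin(x)}{x}\) example (a standard computation, filling in the step): Lebesgue integrability demands that \(\int_0^\infty \frac{|\sin(x)|}{x} dx\) be finite, but bounding each arch of the sine curve from below gives</p>
\[\int_0^\infty \frac{|\sin(x)|}{x}\, dx = \sum_{k=0}^\infty \int_{k\pi}^{(k+1)\pi} \frac{|\sin(x)|}{x}\, dx \geq \sum_{k=0}^\infty \frac{2}{(k+1)\pi} = \infty,\]
<p>a divergent harmonic tail. The improper Riemann integral, by contrast, exists as the limit \(\lim_{T \to \infty} \int_0^T \frac{\sin(x)}{x}\, dx = \frac{\pi}{2}\), in exact analogy with the conditionally convergent series \(\sum_{j=1}^\infty \frac{(-1)^j}{j}\).</p>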
<p>Remarks on the strong law of large numbers:</p>
<ul>
<li>Etemadi’s elementary proof of the strong law of large numbers uses what Diaconis calls the “4 Ts argument”. Each of the Ts has “legs”, i.e. is useful in many other places. They are:
<ul>
<li>Truncation</li>
<li>Tchebyshev</li>
<li>inTerpolation</li>
<li>(T)subsequences</li>
</ul>
</li>
<li>Proving \(\frac{S_n}{n} \overset{a.s.}{\to} \mu\) is beautiful, clear, and has absolutely no real-world implication. It tells you nothing quantifiable about <em>n</em>, which is what you would want in practice. The literature has next-to-nothing on this! At least Chebyshev tells you something particular.</li>
</ul>
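<p>To make the contrast concrete (my example, with arbitrary numbers): for \(n = 100\) fair coin flips, Chebyshev already gives a statement about a particular finite \(n\), namely \(P(|S_n/n - \tfrac{1}{2}| \geq 0.1) \leq \frac{\sigma^2}{n \epsilon^2} = 0.25\), which the exact binomial tail respects:</p>

```python
from math import comb

n, var, eps = 100, 0.25, 0.1   # fair coin: variance 1/4 per flip

# Chebyshev: P(|S_n/n - mu| >= eps) <= var / (n * eps^2)
chebyshev = var / (n * eps**2)

# Exact tail probability: |S_n/n - 0.5| >= 0.1  <=>  S_n <= 40 or S_n >= 60
pmf = lambda k: comb(n, k) * 0.5**n
exact = sum(pmf(k) for k in range(41)) + sum(pmf(k) for k in range(60, n + 1))

print(f"exact tail = {exact:.4f}, Chebyshev bound = {chebyshev:.4f}")
```

The strong law promises convergence eventually; Chebyshev, crude as it is, at least says something quantifiable at \(n = 100\).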
<p>Remarks on Poisson approximation and Stein’s method:</p>
<ul>
<li>References:
<ul>
<li>Arratia, Goldstein, Gordon. <a href="https://projecteuclid.org/journals/statistical-science/volume-5/issue-4/Poisson-Approximation-and-the-Chen-Stein-Method/10.1214/ss/1177012015.full">“Poisson Approximation and the Chen-Stein Method”</a>, <em>Statistical Science</em>, 1990.
<ul>
<li>Many good examples!</li>
</ul>
</li>
<li>Chatterjee, Diaconis, Meckes. “Exchangeable pairs and Poisson approximation”. <em>Probability Surveys</em>, 2005.
<ul>
<li>This gives Stein’s method of exchangeable pairs.</li>
</ul>
</li>
<li>Sourav Chatterjee, <a href="https://arxiv.org/abs/1404.1392v1">“A short survey of Stein’s method”</a>. ICM proceedings, 2014.
<ul>
<li>A more recent readable survey.</li>
</ul>
</li>
<li>Chen, Goldstein, Shao. <em>Normal Approximation by Stein’s Method</em>. Springer 2011.</li>
<li>Barbour, Holst, Janson, <em>Poisson Approximation</em>. Oxford University Press, 1992.</li>
<li>(for better bound) Barbour and Eagleson. “Poisson Approximation for Some Statistics Based on Exchangeable Trials”. <em>Advances in Applied Probability</em>, 1983.</li>
</ul>
</li>
</ul>
<p>Remarks on the central limit theorem:</p>
<ul>
<li>Sourav Chatterjee, <a href="https://projecteuclid.org/journals/annals-of-probability/volume-34/issue-6/A-generalization-of-the-Lindeberg-principle/10.1214/009117906000000575.full">“A generalization of the Lindeberg principle”</a>. Annals of Probability, 2006.
<ul>
<li>Shows that the main idea of Lindeberg’s proof of the CLT is very general, and can be extended.</li>
</ul>
</li>
<li>S.D. Chatterji, <a href="https://www.sciencedirect.com/science/article/pii/S0723086906000429">“Lindeberg’s central limit theorem a la Hausdorff”</a>. Expositiones Mathematicae, 2007.</li>
<li>Fang and Koike, <a href="https://projecteuclid.org/journals/annals-of-applied-probability/volume-31/issue-4/High-dimensional-central-limit-theorems-by-Steins-method/10.1214/20-AAP1629.full">“High-dimensional central limit theorems by Stein’s method”</a>. Annals of Applied Probability, 2021.</li>
</ul>
<p>Remarks on Fourier analysis:</p>
<ul>
<li>Diaconis is a “user, consumer, and developer” of Fourier analysis on noncommutative groups — famously for his theorem that “seven shuffles suffice” to randomize a deck of cards.</li>
<li>Diaconis, <em>Use of Group Representations in Probability and Statistics</em>.</li>
<li>Feller, volume 2, Ch. 15 is one of the best treatments of characteristic functions.</li>
</ul>
<p>Edgeworth corrections and small sample asymptotics:</p>
<ul>
<li>Uses Fourier techniques to get better bounds on CLT results. “Hard, honest work.”</li>
<li>See Bhattacharya and Rao, <em>Normal Approximation and Asymptotic Expansions</em>, or Field and Ronchetti, <em>Small Sample Asymptotics</em>.</li>
</ul>
<p>Some scattered notes from office hours:</p>
<ul>
<li>How does one generate a random <a href="https://en.wikipedia.org/wiki/Contingency_table">contingency table</a> given certain marginals?
<ul>
<li>See e.g. Diaconis and Gangolli, <a href="https://link.springer.com/chapter/10.1007/978-1-4612-0801-3_3">Rectangular Arrays with Fixed Margins</a>.</li>
<li>“Hit and run” algorithms</li>
</ul>
</li>
<li>People care about the Fisher-Yates distribution
<ul>
<li>Diaconis and Efron, <a href="https://purl.stanford.edu/hc313th9149">Generalized variance of the multinomial and Fisher-Yates distributions</a>.</li>
<li>Diaconis and Efron, <a href="https://www.jstor.org/stable/2241103?seq=1#metadata_info_tab_contents">Testing for Independence in a Two-Way Table: New Interpretations of the Chi-Square Statistic</a>.</li>
</ul>
</li>
<li>Diaconis, Holmes, Shahshahani, <a href="https://statweb.stanford.edu/~cgates/PERSI/papers/sampling11.pdf">Sampling From A Manifold</a>.
<ul>
<li>A perfectly legitimate question — how do you do it?</li>
</ul>
</li>
<li>D’Aristotile, Diaconis, and Freedman, <a href="https://statweb.stanford.edu/~cgates/PERSI/papers/sankhya-1988-merg-prob.pdf">On Merging of Probabilities</a>.</li>
<li>Diaconis, <em>10 Great Ideas About Chance</em>.</li>
<li><a href="https://en.wikipedia.org/wiki/Janos_Galambos">Janos Galambos</a></li>
<li>There is a kind of CLT for Brownian motion (<a href="https://en.wikipedia.org/wiki/Donsker%27s_theorem">Donsker’s theorem</a>).</li>
</ul>
<p>Miscellany:</p>
<ul>
<li>Kechris, <em>Descriptive Set Theory</em>.
<ul>
<li>A beautiful book about a now-fading field of research.</li>
</ul>
</li>
<li>Jeffrey Lagarias, <a href="https://www.ams.org/journals/bull/2013-50-04/S0273-0979-2013-01423-X/S0273-0979-2013-01423-X.pdf">“Euler’s constant: Euler’s work and modern developments”</a>. Bulletin of the AMS.
<ul>
<li>A very nice article on Euler’s constant (which we used several times in class in approximations).</li>
</ul>
</li>
<li>Stephen Stigler’s biographical work on Laplace — recommended.</li>
</ul>Monte FischerIn Fall 2021, I took a course on measure-theoretic probability with the great Persi Diaconis. Persi continually suggested books and articles throughout the class, which I have gathered here along with some of his comments.Practical Attention Conservation2021-08-10T00:00:00-07:002021-08-10T00:00:00-07:00/practical-time-management<h2 id="the-hell-of-digital-distraction">The hell of digital distraction</h2>
<p>One of my college buddies has been working a corporate job for about two years. Last I spoke with him, he regularly works well into the night after spending hours and hours watching comedy specials on YouTube during the workday. He’s embarrassed by this, but keeps finding himself in the same situation. His sleep, leisure, and work performance have suffered because of his habit.</p>
<p>When I worked at Epic, I would take breaks and walk around campus. In about half of the offices I walked past, I would see someone hunched over at their phone, swiping and scrolling.</p>
<p>Let’s be honest: <strong>most of us are using tech as a drug to take the edge off, not as a lever to enrich our lives.</strong></p>
<h3 id="choosing-excellence">Choosing excellence</h3>
<p>Call me an idealist, but I believe that most people don’t <em>want</em> to be this distracted. Humans crave meaningful work and leisure. We want connection with other people and the satisfaction of shared experience. Scrolling through Instagram for the 23rd time today doesn’t cut it.</p>
<p>I’m interested in helping people regain the time they’ve been losing to VC-funded infinite distraction machines. This article covers practical tactics to conserve attention and spend time on the things that matter. In the spirit of having skin in the game, everything in this article is something I have personally found helpful, in three categories.</p>
<ol>
<li><a href="#digital-tools-to-stop-online-grazing">Distraction-proof your digital life</a></li>
<li><a href="#cut-out-the-middleman-analog-tools">Find analog alternatives</a></li>
<li><a href="#do-better-things">Do better things</a></li>
</ol>
<p>At the end of the day, your attention and focus is your own responsibility. These tools are useless if your head isn’t in the right place. But if you’re intentional about exercising ownership over your own time and life, then these tactics might be just what you need.</p>
<h2 id="digital-tools-to-stop-online-grazing">Digital tools to stop online grazing</h2>
<p>Using a computer or phone to do <em>anything</em> productive in today’s tech ecosystem is the rough equivalent of a medieval monk sitting down to write his masterpiece in a well-stocked library that happens to double as a full-time free no-questions-asked brothel. Sure, all the necessary tools are at an arm’s reach, but an intense amount of restraint must be exercised if he is to get anything done.</p>
<p>The nice thing about the digital world is that you can use code to solve problems that other people’s code created. Instead of having to build your own library from the ground up, you can instead wall off the brothel at an absolute minimum of effort.</p>
<h3 id="leechblock">Leechblock</h3>
<p>Use <a href="https://www.proginosko.com/leechblock/">Leechblock NG</a> (<a href="https://addons.mozilla.org/en-US/firefox/addon/leechblock-ng/">Firefox,</a> <a href="https://chrome.google.com/webstore/detail/leechblock-ng/blaaajhemilngeeffpbfkdjjoefldkok">Chrome</a>) to schedule times to block or allow distracting websites. It’s funded by donations, open-source, and provides an excellent level of control to the end-user.</p>
<p>I use Leechblock to prevent myself from defaulting to Hacker News, Twitter, or YouTube every time I open a new tab. I keep these sites on a tight leash because I know how dangerous they are to my own productivity. One trick I’ve found especially helpful: if an outright ban of a distracting site causes you to just disable the plugin, try redirecting to a delaying page. When I’m in a mindset that craves cheap, instant distraction, forcing myself to sit and wait for a minute or two before browsing is often <em>too boring</em>! I’m forced to confront my state of mind instead of numbing it with social media or news aggregation sites.</p>
<p><img src="/assets/img/leechblock-delay.png" style="width: 50%; display: block; margin: 0 auto;" /></p>
<p>If there are particular websites that have you addicted — whose business model is to maximize “engagement” (= hours of your life spent scrolling) — then Leechblock is an effective, zero-cost way to kick the habit.</p>
<h3 id="rss-feeds">RSS feeds</h3>
<p>The median individual consumes <em>far</em> more news than actually benefits their life and personal situation. Ask yourself the question: what is the <strong>absolute minimum</strong> amount of news I could consume each day / week / month and still enjoy the same quality of life that I do now?</p>
<p>News isn’t just the New York Times. News is Twitter, Reddit, Facebook, Instagram, YouTube, Hacker News — everything that continually publishes and would <em>love</em> for you to receive notifications every time the latest content drops.</p>
<p>I default to blocking addictive social news sites and carve out specific exceptions when I can intentionally browse. But there are also individual blogs whose content I enjoy and value having as part of my life. For these sites, I use RSS feeds to manage my browsing.</p>
<p>RSS is a standard protocol that websites use to expose the articles they publish in a standard format for viewing in a standalone application called an RSS reader. To find a site’s RSS feed, look for the RSS icon <img src="/assets/img/rss-icon.png" style="width: 20px; border: none" />, or try adding <code class="language-plaintext highlighter-rouge">/feed</code> or <code class="language-plaintext highlighter-rouge">/feed.xml</code> to the end of a website’s main URL. You can download and use any number of desktop application RSS readers (I use <a href="https://newsboat.org/">newsboat</a>), or use a slick web solution like <a href="https://feedly.com/">Feedly</a>.</p>
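<p>Under the hood, a feed is just structured XML, which is why one reader can consume any site’s feed. Here is a minimal sketch in Python using only the standard library (the feed contents below are made up for illustration; a real reader would fetch the XML from the site’s feed URL):</p>

```python
import xml.etree.ElementTree as ET

# A made-up minimal Atom feed, as a site might serve at /feed.xml.
feed_xml = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Blog</title>
  <entry><title>First post</title><link href="https://example.com/one"/></entry>
  <entry><title>Second post</title><link href="https://example.com/two"/></entry>
</feed>"""

# Atom elements live in a namespace, so lookups must qualify tag names.
ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(feed_xml)
titles = [entry.findtext("atom:title", namespaces=ns)
          for entry in root.findall("atom:entry", ns)]
print(titles)
```

An RSS reader is essentially this loop, run on a schedule, with bookkeeping for which entries you have already seen.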
<h3 id="make-your-phone-boring">Make your phone boring</h3>
<p>No matter how good your restraint is on a laptop, it’s meaningless if you’re addicted to your phone. The best solution I have come up with for this is to make my phone as boring as possible.</p>
<h4 id="no-notifications">No notifications</h4>
<p>The only notifications on my phone are from people who have directly called or messaged me. When my phone makes a noise, it is someone I know personally who wants my attention.</p>
<h4 id="delete-social-media-apps">Delete social media apps</h4>
<p>Pretty simple. Delete Twitter, Instagram, Facebook from your phone and you will use them less.</p>
<h4 id="turn-your-phone-off">Turn your phone off</h4>
<p>Another no-brainer. If your phone is buzzing in your pocket while you try to get work done or spend quality time with other people, you’ll end up checking it. Turn off the phone! Some people are in situations where this isn’t feasible, but many people are just afraid of not being reachable. Is it worth the price you pay by constantly dividing your attention?</p>
<h4 id="remove-your-browser">Remove your browser</h4>
<p>This is the nuclear option. If, like me, you naturally tend towards having 300 tabs open on your phone browser, the browser itself can be a major source of distraction.</p>
<p>If you use an iPhone, you can set up limits for apps in Screen Time. I choose to completely block Safari, and download Firefox Focus whenever I need a browser (e.g. when traveling). Firefox Focus is limited to only one tab at a time, which prevents tab accumulation and makes it very easy to delete the app when I don’t need a browser anymore.</p>
<h3 id="no-email-on-your-phone">No email on your phone</h3>
<p>It’s hard to remove email completely from your phone. At the time of writing, I still have the <a href="https://www.fastmail.com/">Fastmail</a> app installed. In the past, however, I have experimented with removing email from my phone entirely. If you are in a situation in which you can afford to do this, I found it to be an excellent way to stop myself from constantly checking my phone.</p>
<h4 id="grayscale">Grayscale</h4>
<p>If you have an iPhone, you can set up a grayscale color filter (Settings > Display & Text Size > Color Filters) to make your phone look like a black-and-white film. I have found that it reduces the visual stimulation I get from glancing at my phone and reminds me to limit my usage.</p>
<h3 id="use-unsexy-tech">Use unsexy tech</h3>
<p>My everyday laptop is a Lenovo Thinkpad X220 from 2011, bought used on eBay for $80. I used it to research, write, and defend my master’s thesis on deep learning in computational photography. It’s an unsexy, corporate-looking business laptop that I use with Linux Mint and a vanilla Xfce4 desktop environment. In the past I used Arch Linux with i3, but eventually decided that spending a bunch of time crafting a Linux DIY desktop environment was just another kind of distraction.</p>
<p>My phone, a 2016 iPhone SE, has such a small screen that it’s almost unpleasant to browse media content on. The result? I spend less time on my phone.</p>
<p>This advice isn’t for everyone, and can sometimes even be counterproductive when frustration with older technology gets me into a distracted and irritated state. I count it better than the extreme opposite of upgrading my phone and computer into digital candy stores, but there’s plenty of middle ground.</p>
<h2 id="cut-out-the-middleman-analog-tools">Cut out the middleman: analog tools</h2>
<p>Paper books do not have hyperlinks. It’s not possible to watch cat videos on your spiral-bound notebook. You’re not going to accidentally find yourself browsing Twitter on a whiteboard.</p>
<p>When you can use a specialized analog tool for your work instead of a digital solution, you reduce the number of opportunities for digital distraction. In my personal experience, screens fatigue me faster than paper.</p>
<h3 id="books">Books</h3>
<p>It’s very tempting to satisfy your curiosity with Google searches and Wikipedia articles. The internet gets hyped as the digital library of Alexandria, but high-quality, well-researched content is in the minority online. Books are still the best way to learn about a subject in depth.</p>
<p>I spend a lot of time reading physical books, both for information and entertainment. I keep a list of what I’ve read <a href="/reading">here</a>. If owning a bunch of heavy books isn’t your thing, public libraries still exist! They are free, and you can use inter-library loans to request books from across your state.</p>
<h3 id="pen-and-paper">Pen and paper</h3>
<p>I have yet to find a digital note-taking experience superior to the pen and small paper notebook in my front pocket. I can turn my notebook sideways and draw a diagram that would require a specialized app and five times as long to create on my iPhone. I can seamlessly insert doodles or drawings in between my notes, or scribble down some mathematics. Even if an iPhone app could perform all these functions, it wouldn’t be as fast or seamless as pen and paper. I use <a href="https://www.amazon.com/Maruman-Nimoshine-ruled-paper-N193A/dp/B00T9CHYZO/ref=sr_1_20?dchild=1&keywords=mnemosyne&qid=1628640264&sr=8-20">Mnemosyne notebooks</a> right now, but there are tons of options out there.</p>
<p>I personally have yet to experience the kind of productivity gains from notetaking apps that many proponents claim. In general, notes that simply replicate passages or concepts from a main text don’t do much for me. From my experience in college, it’s impossible to learn mathematics without working out problems. Until you have a reason to use knowledge, it’s difficult and unnecessary to remember it. Instead, work problems and use spaced repetition software like <a href="https://apps.ankiweb.net/">Anki</a> for those things you absolutely must remember.</p>
<h3 id="kitchen-timer">Kitchen timer</h3>
<p>When I decided to spend at least two hours reading every day, I bought a <a href="https://www.amazon.com/Digital-Kitchen-Magnetic-Multi-function-Teachers/dp/B07SWZRVRP/ref=sr_1_14?dchild=1&keywords=kitchen%2Btimer&qid=1628640626&sr=8-14&th=1">cheap kitchen timer</a> to track myself. Could I have done this through my iPhone’s clock app at zero cost? Yes. But the only time my cheap kitchen timer has ever distracted me from my book was when I misplaced it and had to go looking. No notifications, no email, no analytics or web tracking, no fuss. I can put the timer in front of me to monitor how much time I’ve actually spent on the task at hand. Starting or stopping the timer involves no passcode, no switching apps, no checking my email — just a simple button press.</p>
<p>When we measure how much time it takes us to actually get something done, or how much time we really spend on something we say we value, there’s no room for BS or excuses. Measurement is inconvenient, but clarifying. It’s worth decoupling this from sources of distraction.</p>
<h3 id="reset-your-physiology">Reset your physiology</h3>
<p>When it is impossible to focus, I go for a walk.</p>
<p>If it’s still impossible to focus when I’m done, I drink some water, slow my breathing, and go for another walk. Sometimes it’s not worth fighting my own physiology. If I ate sugary foods or a huge, carb-heavy lunch, it’s no wonder that my head is foggy. If I haven’t been to the gym in a week, my brain isn’t in good shape either. Physical movement and hydration are helpful ways to do a small reset.</p>
<h2 id="do-better-things">Do better things</h2>
<blockquote>
<p>“When the unclean spirit has gone out of a man, he passes through waterless places seeking rest, but he finds none. Then he says, ‘I will return to my house from which I came.’ And when he comes he finds it swept and put in order. Then he goes and brings seven other spirits more evil than himself, and they enter and dwell there; and the last state of that man becomes worse than the first.” — <em>Luke 11:24–26</em></p>
</blockquote>
<blockquote>
<p>Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses. — Nassim Taleb, <em>Antifragile</em></p>
</blockquote>
<p>If you constantly return to digital distraction because you don’t have anything else better to do, the solution isn’t an endless struggle to eliminate distractions. Instead, <strong>find something better to do</strong>. In Taleb’s language, procrastination is a signal conveying valuable information: <em>I am bored</em>. If this is a very loud signal, it’s worth reevaluating what you’re doing.</p>
<p>It’s possible to take this view too far. There will always be parts of life that are routine, uninteresting, but necessary. It’s also possible to develop digital addictions that take away time from things you really do value and want to be doing more of. For most, a mix of both approaches is appropriate.</p>Monte FischerThe hell of digital distractionMax Weber and the Spirit of Capitalism2021-04-10T00:00:00-07:002021-04-10T00:00:00-07:00/weber-capitalism<p>Max Weber argues in <em>The Protestant Ethic and the Spirit of Capitalism</em> that the discipline of “worldly asceticism” that Puritanism imposed on its adherents laid the psychological foundation for the <em>spirit</em> of capitalistic gain that we all know and (more-or-less) love today. There are a couple pieces to this.</p>
<ol>
<li>What is meant by capitalism?</li>
<li>Why does capitalism need a “spirit”? What does that mean?</li>
<li>Why Puritanism? What distinguished it from Catholicism and Lutheranism as regards economic activity and the social order?</li>
<li>What does worldly asceticism mean?</li>
</ol>
<p>In this post, I’ll try to answer the first two points.</p>
<h2 id="the-spirit-of-capitalism">The Spirit of Capitalism</h2>
<p>A certain stance on capitalism which I have often heard is what might be called the “free exchange theory”.</p>
<blockquote>
<p>Capitalism is a completely natural form of human behavior. For all of history, people have been engaging in free exchange for mutual benefit. Capitalism is no more, and no less, than free exchange for mutual benefit.</p>
</blockquote>
<p>Weber has a different view of the matter. Although he defines “capitalistic exchange” as the (in principle) free exchange described above, Weber thinks it is important to also look at the attitudes people have towards wealth and its role in their lives – the <em>ethic</em>, or spirit, of wealth. A system which allows free exchange for mutual benefit is not capitalistic <em>as such</em>. An agricultural society in which all families engaged in just enough work and free exchange to satisfy their (moderate) wants and desires before turning to the other goods in life (such as the enjoyment of family life, religious contemplation, and spontaneous idleness) would not count as capitalistic in Weber’s book.</p>
<p>Another view of capitalism that I have heard expressed is the “greed theory”:</p>
<blockquote>
<p>Capitalism is an economic system which allows and encourages unlimited greed.</p>
</blockquote>
<p>This does posit an attitude that people take towards their wealth and pursuit of economic gain, but it also does not satisfy Weber. Consider a society (history provides many precedents) in which the spirit of ruthless conquest and pillage dominates, where the production of an agricultural base population is continually extorted by warlords, bandits, and other violent forces. Everyone in this society can be as greedy as he or she wishes, but Weber would still not call it capitalism but indeed its very opposite.</p>
<p>Instead, for Weber, capitalism is neither of these things,</p>
<blockquote>
<p>But capitalism is identical with the pursuit of profit, and forever <em>renewed</em> profit, by means of continuous, rational, capitalistic exchange. [1] p.17.</p>
</blockquote>
<p>I think that is a very nice definition of capitalism. The central idea is the pursuit of renewed profit, which is <em>not</em> synonymous with exchange itself. Weber mentions an interesting historical example of this in his discussion of raising wages. In order to encourage field laborers to harvest as many crops as possible before they spoil, a farmer might pay a rate not in terms of hours worked but in terms of crops harvested. By raising the rate, a farmer would hope to encourage a quick and efficient harvest. Often, however, the higher rate had the exact opposite effect! Laborers would actually <em>reduce</em> the size of their harvest, reasoning that they could work less hard but still obtain compensation sufficient for their own needs.</p>
<p>Such laborers are following a very different protocol — they are indwelt by the spirit of <em>traditionalism</em>, not capitalism. Traditionalism has a certain rationality of its own: work is unpleasant and performed insofar as it serves the needs of one’s life and the enjoyment thereof. One might spitefully say that laborers who did so were lazy, unmotivated, or otherwise incorrect. This would be to judge traditionalism from the vantage point of capitalism. Equally, the traditional thinker might find the capitalist absurd — a man living not for himself and his own ends, but for the sake of his wealth and its increase.</p>
<p>The West used to be more or less dominated by the spirit of traditionalism. Today, it is unquestionably ruled by the spirit of capitalism. How did this happen? The free exchange theory I mentioned above does not account for this; neither does the greed theory. People are not more or less greedy today than they were yesterday; free exchange is certainly much easier today than it was in the past but the end towards which free exchange is applied has completely changed from the needs of the individual to the needs of his capital. Weber’s explanation is that the ethical influence of the Protestant faith, and Puritanism in particular, replaced the traditionalist spirit with something much, much closer to the spirit of capitalism. Eventually the religious element withered away, leaving only the spirit of capitalism.</p>
<h2 id="bibliography">Bibliography</h2>
<p>[1] — Max Weber, <em>The Protestant Ethic and the Spirit of Capitalism</em>.</p>
<h2>The QWERTY-Dvorak Permutation</h2>
<p><em>2021-03-21 · /dvorak-group-action</em></p>
<p>I use the Dvorak keyboard layout instead of the standard QWERTY. I’m non-zealous about this choice – it’s simply what I learned to type on in ninth grade, and has been getting in the way ever since. By now, QWERTY feels horribly awkward to type with and I figure that I’m stuck with the Dvorak choice.</p>
<center> <h4>Dvorak</h4> </center>
<p><img class="centered" src="/assets/img/dvorak.png" style="width:80%;" /></p>
<p>When people look at my keyboard, they see what any other keyboard would look like. I don’t remap my keys at the hardware level. Instead, I remap them at the software level, and rely completely on muscle memory for typing. Very hard to hunt and peck when the keys are not labeled correctly! This has the amusing side-effect that anyone who tries to type on my computer is immediately bewildered. People make some funny guesses about how the keys have been remapped – a common first guess is that everything is shifted over one. No, no. There is no rhyme or reason to the mapping itself, just one arbitrary permutation of keys to another.</p>
<p>Wait a minute. Permutations?</p>
<p>This sounds like group theory! I didn’t major in mathematics for nothing, after all. Let’s suppose that the QWERTY layout is “standard order”, and investigate the Dvorak permutation that maps each key of the standard QWERTY layout to its respective Dvorak key. There are 26 letters and 9 punctuation characters that get switched around (ignoring the shift key) for a total of 35 symbols. We’ll call this permutation \(\pi\).</p>
<center> <h4>QWERTY</h4> </center>
<p><img class="centered" src="/assets/img/qwerty.png" style="width:80%;" /></p>
<center> <h4>Dvorak</h4> </center>
<p><img class="centered" src="/assets/img/dvorak.png" style="width:80%;" />
<br /></p>
<p>As anyone with some group theory under their belt can tell you, this mapping is a permutation of the set of 35 elements. Such a permutation can be decomposed into a product of cyclic permutations; for example, Qwerty <code class="language-plaintext highlighter-rouge">w</code> goes to Dvorak <code class="language-plaintext highlighter-rouge">,</code> and, as it happens, Qwerty <code class="language-plaintext highlighter-rouge">,</code> goes to Dvorak <code class="language-plaintext highlighter-rouge">w</code>. This is a small, length 2 cyclic permutation — but they can be longer, of the general form \(A \to B \to C \to \dots \to A\). Group theory says that any permutation you can think of can be decomposed into a combination, or product, of such cycles, where no two cycles share elements (i.e. they are <em>disjoint</em> cycles).</p>
<p>A quick Python script can compute the cycles for us (<code class="language-plaintext highlighter-rouge">dv_map</code> is a dictionary encoding the action of \(\pi\) on each of the 35 characters).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def find_cyclic_decomposition():
    letters = list(dv_map.keys())
    cycles = []
    while len(letters) > 0:
        first_letter = letters[0]
        curr_letter = dv_map[first_letter]
        this_cycle = [first_letter]
        while curr_letter != first_letter:
            this_cycle.append(curr_letter)
            curr_letter = dv_map[curr_letter]
        cycles.append(this_cycle)
        for element in this_cycle:
            letters.remove(element)
    return cycles
</code></pre></div></div>
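<p>For completeness, <code class="language-plaintext highlighter-rouge">dv_map</code> itself can be built by zipping the two layouts together. The layout strings below are my own transcription of the standard QWERTY and Dvorak layouts (treat them as an assumption and verify against the images above); they cover all 35 remapped symbols:</p>

```python
# Sketch: build dv_map by pairing each QWERTY symbol with the Dvorak symbol
# produced by the same physical key (shift states ignored). The two strings
# are hand-transcribed layouts -- double-check them against a real keymap.
qwerty = "-=qwertyuiop[]asdfghjkl;'zxcvbnm,./"
dvorak = "[]',.pyfgcrl/=aoeuidhtns-;qjkxbmwvz"
assert len(qwerty) == len(dvorak) == 35

dv_map = dict(zip(qwerty, dvorak))

def find_cyclic_decomposition():
    letters = list(dv_map.keys())
    cycles = []
    while len(letters) > 0:
        # Start a new cycle at any symbol not yet placed in a cycle,
        # then follow the permutation until it returns to the start.
        first_letter = letters[0]
        curr_letter = dv_map[first_letter]
        this_cycle = [first_letter]
        while curr_letter != first_letter:
            this_cycle.append(curr_letter)
            curr_letter = dv_map[curr_letter]
        cycles.append(this_cycle)
        for element in this_cycle:
            letters.remove(element)
    return cycles

cycles = find_cyclic_decomposition()
print(sorted(len(c) for c in cycles))  # cycle lengths of the decomposition
```

With these strings the decomposition comes out with cycle lengths 1, 1, 2, 2, 14, and 15.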
<p>The cyclic decomposition of \(\pi\) thus computed is as follows (the parentheses are included to tell you where each cycle begins and ends; the parenthesis characters themselves are not in different places between QWERTY and Dvorak):</p>
<p>\(\pi =\)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>( - [ / z ; s o r p l n b x q ' )
* ( e . v k t y f u g i c j h d )
* ( = ] )
* ( w , )
* ( a )
* ( m )
</code></pre></div></div>
<p>Very interesting! One cycle of order 15, one of order 14, two of order 2, and two singletons.</p>
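<p>As a quick sanity check on the arithmetic that follows, the least common multiple of these cycle lengths can be computed in a few lines (a sketch, taking the lengths 15, 14, 2, 2, 1, 1 straight from the decomposition above):</p>

```python
from functools import reduce
from math import gcd

# Cycle lengths read off the decomposition of pi above.
cycle_lengths = [15, 14, 2, 2, 1, 1]

def lcm(a, b):
    """Least common multiple via the identity a*b == gcd(a,b)*lcm(a,b)."""
    return a * b // gcd(a, b)

order = reduce(lcm, cycle_lengths)
print(order)  # 210
```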
<p>Suppose that I had a hardware Dvorak keyboard, and <em>also</em> applied the software mapping. Then I would have mapped the QWERTY keys to Dvorak in hardware, and then <em>again</em> in software. This is equivalent to applying the permutation \(\pi^2\). I wonder how many times we would have to apply the QWERTY-Dvorak transformation until we got back to the QWERTY layout? In group-theoretic terms, we’re asking for the <em>order</em> of \(\pi\) in \(S_{35}\). This is simply the least common multiple of the lengths of all the cycles in the cyclic decomposition: 210.</p>
<h2>Nassim Taleb’s probability theory library</h2>
<p><em>2020-03-29 · /taleb-probability-theory</em></p>
<p>Here are some books on probability theory that have been recommended by Nassim Taleb, along with his commentary (when available).</p>
<p><strong>Fundamentals</strong></p>
<ul>
<li>Feller, <i>An introduction to probability theory and its applications, Vol. 1 & 2</i></li>
<li>
<details>
<summary>Papoulis, <i>Probability, Random Variables and Stochastic Processes</i></summary>
<blockquote>
<p>When readers and students ask me for a usable book for nonmathematicians to get into probability (or a probabilistic approach to statistics), before embarking into deeper problems, I suggest this book by the late A. Papoulis. I even recommend it to mathematicians as their training often tends to make them spend too much time on limit theorems and very little on the actual “plumbing”.</p>
<p>The treatment has no measure theory, cuts to the chase, and can be used as a desk reference. If you want measure theory, go spend some time reading Billingsley. A deep understanding of measure theory is not necessary for scientific and engineering applications; it is not necessary for those who do not want to work on theorems and technical proofs.</p>
<p>I’ve noticed a few complaints in the comments section by people who felt frustrated by the treatment: do not pay attention to them. Ignore them. It is the subject itself that is difficult, not this book. The book, in fact, is admirable and comprehensive given the current state of the art.</p>
<p>I am using this book as a benchmark while writing my own, but more advanced, textbook (on errors in use of statistical models). Anything derived and presented in Papoulis, I can skip. And when students ask me what they need as pre-requisite to attend my class or read my book, my answer is: Papoulis if you are a scientist, Varadhan if you are more abstract.</p>
</blockquote></details></li>
<li> Loeve, <i>Probability Theory I & II</i></li>
<li> Billingsley, <i>Probability and Measure</i> (Borel)</li>
<li>
<details>
<summary>Varadhan, <i>Probability Theory (Courant Lecture Notes)</i></summary>
<blockquote>
<p>I know which books I value when I end up buying a second copy after losing the first one. This book gives a complete overview of the basis of probability theory with some grounding in measure theory, and presents the main proofs. It is remarkable because of its concision and completeness: visibly prof Varadhan lectured from these notes and kept improving on them until we got this gem. There is not a single sentence too many, yet nothing is missing.</p>
<p>For those who don’t know who he is, Varadhan stands as one of the greatest probabilists of all time. Learning probability from him is like learning from Aristotle.</p>
<p>Varadhan has two other similar volumes one covering stochastic processes the other into the theory of large deviations, <i>Large Deviations (Courant Lecture Notes)</i> (though older than this current text). The book on Stochastic Processes, <i>Stochastic Processes (Courant Lecture Notes)</i> should be paired with this one.</p>
</blockquote>
</details>
</li>
<li>Borel, <a href="https://link.springer.com/article/10.1007/BF03019651">Les probabilités dénombrables et leurs applications arithmétiques</a>, 1909. For general intuition.</li>
<li>Kolmogorov, On logical foundations of probability theory.</li>
</ul>
<p><strong>Stochastic Processes</strong></p>
<ul>
<li>Karatzas and Shreve, <em>Brownian Motion and Stochastic Calculus</em></li>
<li>Doob, <em>Stochastic Processes</em></li>
<li>Oksendal, <em>Stochastic differential equations</em>, 2013.</li>
<li>Varadhan, Stochastic processes, 2007.</li>
</ul>
<p><strong>Information Theory</strong></p>
<ul>
<li>Cover and Thomas, <em>Elements of Information Theory</em></li>
</ul>
<p><strong>Extreme Value Theory</strong></p>
<ul>
<li><details>
<summary>Embrechts et al., <i>Modelling Extremal Events: for Insurance and Finance</i></summary>
<blockquote>
<p>The mathematics of extreme events, or the remote parts of the probability distributions, is a discipline on its own, more important than any other with respect to risk and decisions since some domains are dominated by the extremes: for the class of subexponential (and of course for the subclass of power laws) the tails ARE the story.</p>
<p>Now this book is the bible for the field. It has been diligently updated. It is complete, in the sense that there is nothing of relevance that is not mentioned, treated, or referred to in the text. My business is hidden risk which starts where this book stops, and I need the most complete text for that.</p>
<p>In spite of the momentous importance of the field, there is a very small number of mathematicians who deal with tail events; of these there is a smaller group who go both inside and outside the “Cramer conditions” (intuitively, thin-tailed or exponential decline).</p>
<p>It is also a book that grows on you. I would have given it 5 stars when I started using it; today I give it 6 stars, and certainly 7 next year.</p>
<p>I am buying a second copy for the office. If I had to go on a desert island with 2 probability books, I would take Feller’s two volumes (written >40 years ago) and this one.</p>
<p>One housecleaning detail: buy the hardcover, not the paperback as the ink quality is weaker for the latter.</p>
</blockquote>
</details>
</li>
<li>De Haan and Fereira, <i>Extreme Value Theory: An Introduction</i> </li>
</ul>
<p><strong>Limit Theorems</strong></p>
<ul>
<li>Gnedenko & Kolmogorov, <em>Limit Distributions for Sums of Independent Random Variables</em></li>
</ul>
<p><strong>Stable Distributions</strong></p>
<ul>
<li>Uchaikin & Zolotarev, <em>Chance and Stability, Stable Distributions and Their Applications</em></li>
<li>Samorodnitsky and Taqqu, <em>Stable non-Gaussian random processes: stochastic models with infinite variance</em>, 1994.</li>
<li>Zolotarev, One-dimensional stable distributions, 1986.</li>
</ul>
<p><strong>Subexponentiality</strong> (papers)</p>
<ul>
<li>Pitman, <a href="https://www.cambridge.org/core/journals/journal-of-the-australian-mathematical-society/article/subexponential-distribution-functions/DC70266DA35D487BF53B9B8AD852909C">Subexponential distribution functions</a>, 1980.</li>
<li>Embrechts and Goldie, <a href="https://www.sciencedirect.com/science/article/pii/0304414982900138">On convolution tails</a>, 1982.</li>
<li>Embrechts et al., <a href="https://link.springer.com/content/pdf/10.1007/BF00535504.pdf">Subexponentiality and infinite divisibility</a>, 1979.</li>
<li>Chistyakov, <a href="https://epubs.siam.org/doi/abs/10.1137/1109088">A theorem on sums of independent positive random variables and its application to branching random processes</a>, 1964.</li>
<li>Goldie, <a href="https://www.cambridge.org/core/journals/journal-of-applied-probability/article/abs/subexponential-distributions-and-dominatedvariation-tails/12C230EF598D74D71E864B31E51138ED">Subexponential distributions and dominated-variation tails</a>, 1978.</li>
<li>Teugels, <a href="https://projecteuclid.org/journals/annals-of-probability/volume-3/issue-6/The-Class-of-Subexponential-Distributions/10.1214/aop/1176996225.full">The class of subexponential distributions</a>, 1975.</li>
</ul>
<p><strong>Philosophy</strong></p>
<ul>
<li>
<details>
<summary>
Franklin, <i>The Science of Conjecture: Evidence and Probability before Pascal</i>
</summary>
<blockquote>
<p>Indispensable. As a practitioner of probability, I’ve read many books on the subject. Most are linear combinations of other books and ideas rehashed without real understanding that the idea of probability harks back to the Greek <i>pisteuo</i> (credibility) and pervaded classical thought. Almost all of these writers made the mistake of thinking that the ancients were not into probability. And most books, such as Bernstein’s <i>Against the Gods</i>, are not even wrong about the notion of probability: odds on coin flips are a mere footnote. If the ancients were not into computable probabilities, it was not because of theology, but because they were not into games. They dealt with complex decisions, not merely probability. And they were very sophisticated at it.</p>
<p>This book stands above, way above the rest: I’ve never seen a deeper exposition of the subject, as this text covers, in addition to the mathematical bases, the true philosophical origin of the notion of probability. In addition Franklin covers matters related to ethics and contract law, such as the works of the medieval thinker Pierre de Jean Olivi, that very few people discuss today.</p>
</blockquote>
</details>
</li>
</ul>
<p>Taleb has also remarked on Twitter that Stoyanov, <em>Counterexamples in Probability</em> is a good read.</p>
<h3 id="sources">Sources</h3>
<p>Many of the above books came from <a href="https://twitter.com/nntaleb/status/1215701140546969600?s=21">one of Taleb’s tweets</a> and page 87 of his <em>Statistical Consequences of Fat Tails</em>, 2020. <a href="https://www.amazon.com/gp/profile/amzn1.account.AHMHNR4MRTDLMBOOT6Q7LX2WP5YA/ref=as_li_ss_tl?ie=UTF8&linkCode=sl2&tag=nntbr-20&linkId=6151d1194422622b794dc139b312195d&language=en_US">Taleb’s Amazon recommendations</a> are another source.</p>
<p>A very good compilation of all of Taleb’s Amazon recommendations (as of 2012) is available <a href="https://fs.blog/2012/02/book-recommendations-from-nassim-taleb/">on Farnam Street</a>. Here I have limited the selections to probability theory alone; for mathematical finance, statistics, philosophy, etc., consult Farnam Street or <a href="https://www.amazon.com/gp/profile/amzn1.account.AHMHNR4MRTDLMBOOT6Q7LX2WP5YA/ref=as_li_ss_tl?ie=UTF8&linkCode=sl2&tag=nntbr-20&linkId=6151d1194422622b794dc139b312195d&language=en_US">Taleb’s Amazon page</a> for the most recent reviews (e.g. Hastie, <em>Elements of Statistical Learning</em> and Goodfellow, <em>Deep Learning</em>).</p>
<h2>Using TensorBoard with a Google Cloud Platform Instance</h2>
<p><em>2020-02-20 · /tensorboard-with-gcp</em></p>
<p>It took me a little while to figure out how to set up TensorBoard on the Google Cloud Platform instance I use for my master’s research. This is a quick write-up of what I figured out to do; perhaps it will help someone else.</p>
<p>TensorBoard is a useful web application that visualizes Tensorflow networks. The official resources for TensorBoard tell you all about how to code your network so that it produces log files that TensorBoard can understand and visualize. Google Cloud Platform (GCP) offers a way to rent powerful machines from Google, and offers $300 of free credits for first-time users. Assuming you’ve gotten GCP set up, have installed the gcloud command line tool, and have some log files that can be used by TensorBoard on your GCP instance, here’s what you do next.</p>
<p>Start up your GCP instance, connect to it, and launch TensorBoard:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gcloud compute instances start [INSTANCE_NAME]
$ gcloud compute ssh [INSTANCE_NAME]
user@MY_INSTANCE:~$ tensorboard --logdir LOG_DIRECTORY
</code></pre></div></div>
<p>Assuming that port 6006 is open, this will start hosting TensorBoard from <code class="language-plaintext highlighter-rouge">LOG_DIRECTORY</code> at port 6006 of your GCP instance. Problem is, you’d like to be able to see what TensorBoard is saying from your local machine. The solution is classic port forwarding. You can set up an <code class="language-plaintext highlighter-rouge">ssh</code> connection to your GCP instance such that one of your local ports gets “forwarded” to a port on the remote GCP instance. That way, whenever you go to look at that local port on your machine, you can see through the <code class="language-plaintext highlighter-rouge">ssh</code> tunnel to what’s being hosted on your GCP instance’s port 6006. As a command, you do this in a terminal on your local host:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gcloud compute ssh [INSTANCE_NAME] -- -NfL 6006:localhost:6006
</code></pre></div></div>
<p>Now open up a web browser and connect to <code class="language-plaintext highlighter-rouge">localhost:6006</code>. You will be taken, through the ssh port forward, to port 6006 on your GCP instance! You should see the TensorBoard dashboard appear. That’s it! A simple practical lesson in port forwarding.</p>
<p>P.S. If, like me, you are mucking around with TensorBoard and get annoyed at the multiple processes you have spawned clogging up your ports, you can use <code class="language-plaintext highlighter-rouge">$ netstat -anp | grep 6006</code> to find the PID of the offending process and stop it with <code class="language-plaintext highlighter-rouge">$ kill -9 [PID]</code>.</p>