Sunday, December 8, 2024

Writing down (and searching through) every UUID

https://eieio.games/blog/writing-down-every-uuid/

I think the site is great. I can quickly find my favorite UUIDs and star them or browse them all to find one that’s just right.

But having 5,316,911,983,139,663,491,615,228,241,121,378,304 possible values made it way harder than it needed to be to write them all down. I’m not sure why the authors of the UUID spec wanted to include so many bits!

So I think the final implementation here is pretty interesting. Let me tell you about it.

This problem had a few major challenges:

  • Browsers do not want to render a window that is over a trillion trillion pixels high, so I needed to handle scrolling and rendering on my own
  • I didn’t want to generate UUIDs in order from first to last. We all know the good UUIDs are in the middle! So I needed a way to generate UUIDs that guaranteed every one of them appeared exactly once (one possible approach is sketched just after this list).
  • Since I was handling scrolling and rendering on my own, ctrl-f didn’t really work for search. I wanted to search through every UUID, not just the ones I could see! So I had to implement that too.
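For that second challenge, one simple scheme (a sketch of a standard approach, not necessarily the one the article uses) is an invertible affine map over the 122 free bits of a version-4 UUID: multiplying an index by an odd constant modulo 2^122 is a bijection, so every scroll position maps to a distinct UUID, and the inverse map turns any UUID back into its position, which is exactly what search needs.

    import uuid

    # Sketch: an affine bijection over the 2**122 free bits of a v4 UUID.
    # The constants A and C are arbitrary (hypothetical) choices; any odd A works.
    M = 1 << 122                          # number of distinct v4 UUIDs
    A = (0x9E3779B97F4A7C15 << 58) | 1    # odd, so invertible mod 2**122
    C = 0x0123456789ABCDEF                # any offset
    A_INV = pow(A, -1, M)                 # modular inverse (Python 3.8+)

    def index_to_bits(i: int) -> int:
        return (A * i + C) % M            # scroll position -> unique 122-bit value

    def bits_to_index(x: int) -> int:
        return ((x - C) * A_INV) % M      # inverse map, used for search

    def bits_to_uuid(x: int) -> uuid.UUID:
        # Splice the 122 free bits around the fixed version (0100) and variant (10) fields.
        hi48, mid12, lo62 = x >> 74, (x >> 62) & 0xFFF, x & ((1 << 62) - 1)
        return uuid.UUID(int=(hi48 << 80) | (0x4 << 76) | (mid12 << 64) | (0b10 << 62) | lo62)

Because the map doesn't repeat until all 2^122 indices are used, rendering the UUID at row i is O(1), and searching for a pasted UUID is just the inverse map plus a division to find the right scroll offset.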

Thursday, October 31, 2024

Buy, Borrow, Die - Explained

Step 1A. Buy.

This stage of the planning really is that simple. Peter will purchase an asset for $50M. His "basis" in the asset is therefore $50M. Let's assume the asset appreciates at an annual rate of 8 percent. After 10 years, the asset now has a fair market value of $108M and Peter has a "built-in" (or "unrealized") capital gain of $58M.

If Peter sells the asset, it's a "realization" event and he'll be subject to income tax. The asset is a capital asset, and since Peter has owned it for more than 1 year, he'd receive long-term capital gain treatment and pay income tax at preferential rates if he sold it. Even so, Peter's long-term capital gain rate would be 20 percent, he'd be subject to the net investment income tax of 3.8 percent, and Peter lives in Quahog, which has a 5 percent income tax rate.

So, if Peter were to sell the asset and cash in on his gain, he'd have a total tax liability of around $17M, and his after-tax proceeds would be $91M.
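For reference, those figures (and the $740M that shows up in Step 3A below) are plain compound-growth arithmetic; a quick Python sketch:

    basis = 50e6                      # Peter's $50M purchase price
    fmv_10y = basis * 1.08 ** 10      # ~$108M fair market value after 10 years
    gain = fmv_10y - basis            # ~$58M built-in gain
    rate = 0.20 + 0.038 + 0.05        # LTCG + net investment income tax + Quahog's 5%
    tax = gain * rate                 # ~$17M total tax liability on a sale
    after_tax = fmv_10y - tax         # ~$91M after-tax proceeds
    fmv_35y = basis * 1.08 ** 35      # ~$740M at Peter's death (Step 3A)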

Peter's buddy Joe overheard some of his cop buddies talking about how the ultrawealthy never pay taxes because they implement "buy, borrow, die," and he shares the idea with Peter. Peter decides to look into it.

Step 2A. Borrow.

Peter goes to the big city and hires a private wealth attorney, who connects him with an investment banker at Quahog Sachs. The investment bank might give Peter a loan or line of credit of up to $97M (a "loan to value" ratio of 90 percent) based on several conditions, including that the loan/line of credit is secured by the asset. Now Peter has $97M of cash to use as he pleases, and he's paid no taxes.

Step 3A. Die.

Peter has been living off these asset-backed loans/lines of credit and his asset has continued appreciating in value. Let's say 35 years have passed since he bought the asset. With an annual rate of return of 8 percent, the asset now has a fair market value of $740M.

Then Peter dies. When Peter dies, the basis of the asset is "adjusted" to the asset's fair market value on Peter's date of death. In other words, Peter's basis of $50M in the asset is adjusted to $740M.

Peter's estate can now sell the asset tax free, because "gain" is computed by subtracting adjusted basis from the sales proceeds ($740M sales proceeds less $740M adjusted basis equals $0 gain).

Peter's estate can use the cash to pay back the loans/lines of credit. He's paid no income tax, and his beneficiaries can now use the cash to buy assets and begin the "buy, borrow, die" cycle themselves.


https://old.reddit.com/r/BuyBorrowDieExplained/comments/1f26rsf/buy_borrow_die_explained/

Friday, September 27, 2024

Using Snapraid for drive redundancy and quasi-backups

Intro

I recently added some hard drives to my home media server, and have finally decided to add some redundancy there.  I've long resisted using RAID, because it only protects against one very specific form of data loss (drive failure), while ignoring things like file corruption, or just user error.  On the other hand, mirroring data to another drive is such a waste of disk space that I could never bring myself to do it, especially for stuff I could ultimately redownload if I needed to.  I'll explain my solution, although if you've read the title you may already have guessed what it is.  First though, I'd like to go over what RAID is and why I don't like it.

What is RAID?

If you know what RAID is, you can skip over this section.  Alternatively you can read this better overview.

RAID stands for Redundant Array of Inexpensive Disks.  The idea being you buy cheaper drives, which may be more likely to fail, but then have some redundancy between them, so that if any do fail you don't lose data.

There are many types of RAID setups, each designated by a different number, and we'll go over a few of them.  All RAID setups share some basics, like combining all your drives to look like one large drive.  There are four criteria that vary between the RAID types and are worth considering when comparing setups:

  1.   How many drives can I lose before I lose data?
  2.   Space efficiency (how many drives are wasted on redundancy?)
  3.   Read performance
  4.   Write performance

RAID 1

RAID 1 is the simplest: it is just 2 drives that each hold a copy of your data.  In RAID 1, if you have two 12 TB drives it would look like you just have one 12 TB drive, but there would be a real-time copy of it on the second drive.  If you have more than 2 drives the same concept applies, where half the drives are devoted to mirroring the other drives.

In RAID 1 you can lose half your drives before you lose data; however, if you have more than 2 drives, that will depend on which drives fail, since they are paired up.  If you had a 6 drive RAID 1 setup, you could lose data with 2 drive failures, or be fine with 3 failures, depending on which drives fail (but you'd always be safe with 1 drive failure).

Lastly, the read and write performance of RAID 1 is interesting.  While the write speed isn't affected much, the read speed is roughly doubled.  This is because half of each file can be read from each drive, which leads us to our next RAID level...

RAID 0

RAID 0 is similar to RAID 1, except only half of each file is stored on each drive.  This means that no single drive has any full file.  The benefit of this is that you get roughly double read and write speed, since you are only reading and writing half the file on any single drive.  RAID 0 also doesn't lose any drives to redundancy.  If you have two 12 TB drives, your RAID 0 system will have 24 TB available.

If you haven't figured it out already, there is one downside to RAID 0: you will lose all your data if you lose a single drive.  RAID 0 has no redundancy (hence the "0"), and in fact, it's worse than not using any RAID, because a single drive failure will destroy all your data on both drives, rather than just one.

RAID 0 isn't typically used by itself, but is often combined with RAID 1 to form various permutations of RAID 10.  RAID 10 is just four or more drives in some combination of RAID 1 and 0.  I'm going to skip over giving more details because this background section has already grown much too long.

RAID 5

This is where RAID gets interesting.  In RAID 5 you have 3 or more drives, and you lose exactly 1 of those to parity data.  We haven't mentioned "parity" yet, but the idea is that we can calculate a checksum based on the combined data on the rest of the drives, and that checksum is enough to recover the data if one of those drives fails.

With RAID 5 you lose one drive to redundancy, which means it becomes more efficient as you get more drives (with 2 drives it would be the same efficiency as RAID 1, while having worse performance).  The read performance of RAID 5 is the same as no RAID, but the write speed takes a hit because the parity needs to be calculated in real time for everything you write to the disk.

Another thing to consider is that you can safely handle a single drive failure, but as the number of drives you have goes up, the odds of a second drive failing before you are able to rebuild the first go up.  Therefore, it's a balancing act of how many drives you "waste" on parity vs how many you have for data.

RAID 6

RAID 6 is just RAID 5, but with two parity drives instead of one.  Everything I said about RAID 5 applies, just with two drives for redundancy instead of one.  This makes more sense as you get more drives, but the trade-off is losing more usable space.  The key thing here is that you can lose the same number of drives as you have parity drives.  It doesn't matter if the lost drives are data drives or parity, or any combination of them.  If you have 2 parity drives, and you lose 2 drives, you will be able to recover.

This page is a good summary of the RAID levels, including some I didn't talk about.

What is parity?

I want to explain what I mean by "parity" when discussing redundancy, mainly because I think it's a pretty cool concept.  Again, feel free to skip this section if you understand what parity is.

In its simplest form, parity is just combining the bits on the drives using the XOR function.  XOR stands for exclusive OR.  I'm going to restrain myself from explaining XOR in depth, and just say that, for our purposes, XOR asks the question "Are there an odd number of 1 bits?".  So if you are looking at 3 bits, and they are 1 0 1, then the answer to that question is no (there are two 1s, and two is even).  Therefore XOR(1 0 1) = 0, and XOR(0 0 1) = 1 because there is one 1, and one is odd.

Now here's the really cool thing about XOR.  If you take the output of XOR and store it along with the inputs, you can remove any one of those inputs, and the XOR of the remaining inputs along with the output will equal the missing input.  For example, XOR(1 0 1) = 0.  If we lose the first bit there, and instead have X 0 1, we can just take the XOR of what we have left, plus the output (which was 0), to get XOR(0 1 0) = 1.  I know that is confusing, but as another example: XOR(1 1 1) = 1 (three 1s, three is odd), so we store 1 1 1 1 (the last bit is the parity bit that we got from the XOR), and then if we lose any of those 1s, we can calculate XOR of what remains and we know XOR(1 1 1) = 1.  Another quick example:
XOR(1 0 0) = 1

Store that combined as 1 0 0 1

Then lose one bit: 1 X 0 1

Calculate XOR of what remains: XOR(1 0 1) = 0

Replace the lost bit above with the 0 we just got: 1 X 0 1 -> 1 0 0 1
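
The same round trip in a few lines of Python, as a toy sketch with one byte per "drive":

    from functools import reduce

    data = [0b10110100, 0b01101100, 0b11100111]   # three data drives, one byte each
    parity = reduce(lambda a, b: a ^ b, data)     # XOR of all the data bytes

    lost = data[1]                                # pretend drive 2 fails
    survivors = [data[0], data[2]]
    rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)  # XOR survivors with parity
    assert rebuilt == lost                        # the lost byte comes back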

I made a spreadsheet to show this with full 8 bit bytes.  This is representing 3 drives (d1 through d3), each with 8 bits of data on them (lettered A through H), and then a parity drive (p1) with the calculated XOR of each column.  If you look at any column and count how many 1s there are (excluding the final row, which is parity), then the final row should have a 1 if there were an odd number of 1s and a 0 if there were an even number of 1s.  So in the below image, column A has two 1s, so the parity bit is 0.  Column C has one 1, and so the parity bit is 1.

The first table in the image shows the three data drives and the parity drive all full.  The middle table shows d2 having failed, highlighted in red, now replaced and empty.  The final table shows d2 after it was replaced by calculating the XOR of the other white rows.  In the first and last table the gray row is the result of XORing the 3 white rows.

This scales up to any number of drives, and we always only lose a single drive to parity.  Here we have 7 data drives and only 1 parity drive, and still can lose any drive and then recover.

You may now be asking how RAID 6 works with 2 parity drives, where you can lose any 2 drives and then recover.  The answer is that the math becomes much more complex, and I can't explain it, but if you want to read through it, this post does a good job of explaining it.

Why don't I like RAID?

We've covered a lot of ground for me to be able to answer this question.  First off, there are some practical concerns I haven't really touched on.  RAID is annoying to set up, and requires identically sized drives.  This is fine for a data center that is going to have many servers each running RAID, where the drives only need to match within a single server, but for the home user, whose storage is going to grow organically, it's very annoying to have to replace all your drives every time you want to increase your storage.

The bigger problem with RAID though, is that it's not a backup.  I mentioned this above, but RAID protects you against your drive failing, and nothing else.  It does not protect against software corruption, user error, or your house burning down.  If you delete a file from a RAID server and then realize that was a mistake, it's too late: it and the redundant copy are gone.  If a file gets corrupted when you save it, the corruption is also instantly copied to the parity.

What about Unraid?

Unraid is a paid operating system, which aims to make RAID much more user friendly.  Despite its name, Unraid really is just RAID, but with a lot of the annoying parts eliminated.  It handles the problem of having a mix of drive sizes, and allows adding new drives to an existing array.

I strongly considered Unraid for my home server.  The main reasons I didn't go with it though, are:

  1. It isn't cheap.  It's either $50/year or $250 for a lifetime license.
  2. It requires starting with empty drives.  While you can add a new blank drive to an existing Unraid array, if you are starting out with full drives you will have to buy enough empty drives to start a new empty array, and then move stuff onto them.  This also means all the configuration I have set up on my server (Home Assistant, Plex, etc.) would be lost, or would at least require me to migrate it all over and deal with the downtime while I did that.
  3. Finally, Unraid is not any more of a backup than RAID is.  As far as parity goes, Unraid is the same as RAID 5, 6, etc., depending on how many parity drives you use.  But if you corrupt a file, you still have no backup of it.

Snapraid to the rescue

I discovered something called Snapraid, and instantly knew it was the best choice for me.  Snapraid is an open source command line tool which calculates parity for any number of drives and stores it as a single parity file on another drive.

Snapraid has one main catch, which is both a pro and a con: it only runs on demand, meaning that when you make changes, they aren't added to the parity until you actually run the snapraid command to recalculate it.  This, however, means it serves as a sort-of-backup.  Things like corrupted files or mistakenly deleted files can be recovered, as long as you discover the problem before you next recalculate the parity.

Snapraid works best where you have a large amount of media that rarely changes.  This is my exact use case, so it works very well for me.

The drawbacks of Snapraid are:

  1. It doesn't combine drives into one filesystem (although it is often combined with mergerfs, which does exactly that).
  2. It's a command line tool (although really pretty easy to use).
  3. Updates only happen on demand, which means anything you write will not be protected until you recalculate the parity.
  4. Unintuitively, updating a file will cause other files to be unprotected until you rerun the parity command.  I'll explain this one more later; it's the biggest catch.

The benefits of Snapraid are:

  1. It's a free open source command line tool.  I realize I had command line as a drawback above, but it will be a pro or a con depending on your point of view.
  2. It can be added to an existing system very easily.  All you need is enough space for the parity file, which means one empty drive at least as large as your largest data drive.
  3. You can use any mix of drives, and add and remove them from the array easily.
  4. You can use any number of parity drives, to cover whatever level of multiple-drive-failure risk you're OK with.  Their FAQ includes a good guideline on how many parity drives you should use.
  5. There is no performance overhead when you write (or read) files.  It's not running at all aside from whenever you schedule it to run.  It only takes a few minutes to run after the first time.
  6. It has features to help protect against the random errors that can happen in RAM (the kind of thing people often use ECC memory for on RAID servers).  Namely, it can run the parity calculations twice for each file, and it has a "scrub" feature where it can double check the existing parity data for a percentage of your total data.  The scrub will also detect corruption that occurs from a failing HDD.
  7. The fact that it only runs on demand means you can recover from mistakes and corruption as long as you notice before the next run.

I mentioned Snapraid is pretty easy to use.  You set up a .conf file telling it what your data drives are, and which drive to store parity on.  You can also exclude files or folders and tell it to ignore them, which is a good option for frequently changing data (although obviously consider some other backup strategy for that data).  Once it's set up, you run it with snapraid sync, and your setup could be as simple as running that via a cronjob every night.
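
To give a flavor, a minimal snapraid.conf might look something like this (the paths and drive names are hypothetical; see the Snapraid manual for the real details):

    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    exclude /scratch/
    exclude *.tmp

With that in place, the nightly cronjob really can be a single line running snapraid sync.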

Snapraid AIO Script

There is a nice bash script that makes running Snapraid much easier.  You can read through the readme, but its main features include an email report every time it runs, as well as a lot of configuration over when Snapraid should not run.  For example, if it detects that a large number of files have changed or been deleted, you can have it email you a warning and not run.  This would give you a chance to recover those files if it turned out they had been deleted by accident.

I'd recommend checking out that script if you do use Snapraid.  From what I can see, the contributors to it are all really cool people.

Biggest gotcha to Snapraid

I mentioned this above, but it's worth reiterating, as it's the biggest potential problem with Snapraid.  If you either delete or modify a file, that file will of course not be protected until you run the snapraid sync command again.  However, random other files on your other drives will also lose their protection until you run the command again.  To understand why, go up and review those parity diagrams from above.  By modifying a file on drive 1, you are changing the parity calculation for the same spot on drive 2, drive 3, etc.  Modify the bit in column A of d1, and you can no longer recover the bits in column A of d2 or d3.  So, if you delete a 1 MB file on drive 1, you now have a 1 MB hole in the protection of drives 2, 3, etc.

There are two ways to mitigate this.  First, don't delete and modify things often, and when you know things are going to be modified or deleted, schedule that to happen shortly before the sync command is scheduled to run (but not so shortly before that it's still in progress when the sync starts).  Second, when you need to delete something, you can move it to a Snapraid-excluded folder.  Then the file still exists, and if a drive fails you can move it back to its prior location and be able to recover everything.  Once you run the sync command, Snapraid will calculate a new parity without the excluded files, and you can delete them whenever you want.

Odds and ends

This, predictably, grew to be quite a large post.  Still, there are some random additional thoughts I wanted to include, so here they are.

I've written before about how I'm backing up my documents to AWS S3 Glacier.  This is still my strategy for the things I care the most about.  It gives me daily snapshots and is quite cheap. My AWS bill is currently $0.25/month, and I have years of backups I haven't deleted.

I also have a bunch of systems that I want to be able to recover if their root drive fails.  My strategy for these is to mirror some of their folders nightly to a HDD in the server.  I exclude those folders from Snapraid, since they are already mirrors of their primary locations.  I'm using rsync to sync a few important folders from my desktop and the server itself.  I also have a bunch of Pis, and I want to be prepared if an SD card fails.  I wrote before about doing that with rsync, but I'm now using this script to make a full image of the entire SD card to an NFS share.  That image can be mounted and examined.  It works pretty well, although it's not super user friendly.  I also found this script, which might be better, but I haven't tried it.

I discovered the site healthchecks.io, which I'm a really big fan of.  It helps you keep track of all your backup scripts to detect if any start to silently fail.  You can create up to 20 checks for free, and for each you get a unique URL, which you then hit as the final step of your backup script so it registers as a run.  Then you can configure the site with how often those scripts should run, and it'll alert you if they miss a check-in.  I have a slightly more complex pattern I've been following which pings the site at the start and end of the script, so it can monitor how long each run takes, and I will be alerted right away if a script fails.  You can see an example of that in my backup script here.
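
As a sketch of that start/end pattern (the ping URL is a placeholder for the one healthchecks.io assigns your check, and the rsync command stands in for whatever the backup actually does):

    import subprocess, urllib.request

    PING = "https://hc-ping.com/your-check-uuid"   # placeholder ping URL

    urllib.request.urlopen(PING + "/start", timeout=10)        # signal: run started
    rc = subprocess.run(["rsync", "-a", "/data/", "/mnt/backup/"]).returncode
    # ping the plain URL on success, or append the exit status to flag a failure
    urllib.request.urlopen(PING if rc == 0 else f"{PING}/{rc}", timeout=10)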

Finally, I used the site serverpartdeals.com to buy my additional drives.  They are pretty well regarded around the internet, and they have good prices on recertified drives.  I've never used recertified drives before, but I figure if I'm going to have them protected with parity anyway, I might as well buy more, cheaper drives and just increase parity if I feel that is too risky.


Thursday, August 15, 2024

Youtube TV Channels

https://ytch.xyz/

Replicates the TV experience with channels of always-playing YouTube videos.

Here are the themes of the channels:
Channel 1: Science and Technology
Channel 2: Travel and Events
Channel 3: Food
Channel 4: Architecture
Channel 5: Film and Animation
Channel 6: Documentaries
Channel 7: Comedy
Channel 8: Music
Channel 9: Autos and Vehicles
Channel 10: News and Politics
Channel 11: UFC
Channel 12: Podcasts/Interviews/Talk Shows

Sunday, August 11, 2024

A wonderful coincidence or an expected connection: why π² ≈ g.

https://roitman.io/blog/91

Let's start by taking a close look at the right side. The value 9.81 is in m/s². But these are far from the only units of measurement. If you express this value in any other units, the magic immediately disappears. So, this is no coincidence—let's dig deeper into the meters and seconds.

What exactly is a "meter," and how could it be related to π? At first glance, not at all. According to Wikipedia, a "meter is the distance light travels in a vacuum during a time interval of 1/299,792,458 seconds." Great, now we have seconds involved—good! But there's still nothing about π.

Wait a minute, why exactly 1/299,792,458? Why not, for example, a round 1/300,000,000? Where did this number come from in the first place? It seems we need to delve into the history of the unit of length itself to understand this better.
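
Where this is headed (the standard story, which the rest of the article tells): an early candidate definition of the meter was the length of a "seconds pendulum", one whose half-period is exactly one second. Plug that into the small-angle pendulum formula and g comes out as π² by construction:

    T = 2π·sqrt(L/g)  =>  g = 4π²·L/T²
    with T = 2 s (a half-period of 1 s) and L = 1 m:  g = π² m/s² ≈ 9.87 m/s²

The meter actually adopted was based on the Earth's meridian (and later on light), which came out close to, but not exactly, the seconds-pendulum length; hence ≈ rather than =.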


Thursday, July 11, 2024

Reverse Engineering TicketMaster's Rotating Barcodes (SafeTix)

https://conduition.io/coding/ticketmaster/

These six-digit numbers behave a lot like Time-based One-Time Passwords (TOTPs), the mechanism that powers 2FA apps like Authy or Google Authenticator: rotating 6-digit codes which can be generated from a shared secret and a timestamp.

My instinct was that the first two numbers are indeed TOTPs, generated from different secrets, using the unix timestamp appended at the end of the barcode data. This makes sense: TicketMaster wouldn’t want to reinvent the wheel with this system, so they used a tried and tested cryptographic tool as a building block.
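
For reference, a minimal RFC 6238 TOTP generator is only a few lines; this is the generic algorithm (standard HMAC-SHA1, 30-second step, 6 digits), not TicketMaster's confirmed parameters:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
        """Standard RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
        key = base64.b32decode(secret_b32)
        counter = int(time.time() // step)                  # 30-second window index
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                             # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # demo secret; prints a rotating 6-digit code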

The base64 data was still a mystery. Decoding it into its constituent 48 bytes, it doesn’t seem to contain any meaningful data structures that I could discern. It seems more or less like random data, and since it doesn’t change when the barcode rotates, it’s probably some kind of random bearer token which identifies the ticketholder and their ticket.

When the ticket is scanned at the venue, TicketMaster (or perhaps the venue) looks up the ticket metadata using that bearer token, and then validates the two OTPs against two secrets stored in its database. If both steps pass, then your ticket is valid and the staff can let you in.

Monday, July 1, 2024

Will we ever get fusion power?

https://www.construction-physics.com/p/will-we-ever-get-fusion-power

The second avenue of progress since the 1990s has been on inertial confinement fusion. As discussed earlier, inertial confinement fusion can be achieved by using an explosion or other energy source to greatly compress a lump of nuclear fuel. Inertial confinement is what powers hydrogen bombs, but using it as a power source can be traced back to an early concept for a nuclear power plant proposed by Edward Teller in 1955. Teller proposed filling a huge underground cavern with steam, and then detonating a hydrogen bomb within it to drive the steam through a turbine.

The physicist tasked with investigating Teller’s concept, John Nuckolls, was intrigued by the idea, but it seemed impractical. But what if instead of an underground cavern, you used a much smaller cavity just a few feet wide, and detonated a tiny H-bomb within it? Nuckolls eventually calculated that with the proper driver to trigger the reaction, a microscopic droplet of deuterium-tritium fuel could be compressed to 100 times the density of lead and reach temperatures of tens of millions of degrees: enough to trigger nuclear fusion.

This seemed to Nuckolls to be far more workable, but it required a driver to trigger the reaction: H-bombs used fission-based atom bombs to trigger nuclear fusion, but this wouldn’t be feasible for the tiny explosions Nuckolls envisioned. At the time no such driver existed, but one would appear just a few years later, in the form of the laser.


Wednesday, June 19, 2024

Reverse Engineering a Restaurant Pager system

https://k3xec.com/td158/

It’s been a while since I played with something new – been stuck in a bit of a rut with radios recently - working on refining and debugging stuff I mostly understand for the time being. The other day, I was out getting some food and I idly wondered how the restaurant pager system worked. Idle curiosity gave way to the realization that I, in fact, likely had the means and ability to answer this question, so I bought the first set of the most popular looking restaurant pagers I could find on eBay, figuring it’d be a fun multi-week adventure.

Sunday, June 9, 2024

South Pole Water Infrastructure

https://brr.fyi/posts/south-pole-water-infrastructure

For work that takes you away from station, without access to toilet facilities, many personnel also carry portable bottles. These are a formal item, provided by USAP, and marked for their intended use. They are 32oz “HDPE” Nalgene bottles.

You can obtain one at the beginning of your season, and it’s your responsibility to return it, thoroughly cleaned and sanitized, before you depart. These are often used by personnel who travel to deep field locations, but they are also helpful for any situation where you may find yourself away from permanent facilities.