Friday, September 10, 2021

Suspicious discontinuities

This page is a wall of text and some charts, but it shows some really interesting patterns in a wide array of data sets.

Thursday, September 2, 2021

How I DIY smart home sensors

Home Assistant

I've had a draft post about Home Assistant for years, which I haven't posted because I feel like my HA install is perpetually under construction.  If you haven't heard of Home Assistant, it's a LAN-based, open source, smart home control center.  Typically installed on a Raspberry Pi, it has an absurd number of integrations with third parties.  While the UI of HA can be polished up to be pretty nice, its real value is in automations that never require opening the UI.  This isn't an HA post so I'll stop talking about it, but I highly recommend playing around with it if you have any interest in smart homes and are willing to tinker a bit in exchange for not having to use cloud-based systems.


While I have a variety of sensors that serve particular purposes, I'm obsessed with collecting basic data for each area of my house.  In particular, I like to measure the temperature, humidity, and light level on each level of my house (including attic and basement).  I do nothing with this information, but I look at the graphs literally every day, and they bring me great joy.  If you're confused right now, this is probably a good time for you to cut your losses and stop reading this post.  If you're excited for your own graphs, read on.


While you can buy ready-made sensors that measure these things, I've always built my own.  They are much cheaper, and you have way more flexibility if you decide you want to add something later.  Originally I started out with Raspberry Pis, and then Raspberry Pi Zero Ws.  This works, but if you start wanting to do multiples of these, then the overhead of SD cards and high current power supplies starts to add up.

I've always considered using an ESP32 (a microcontroller board in the same vein as an Arduino) instead, but they are just more annoying to work with.  While a Raspberry Pi is a full computer running Linux, an ESP32 is a microcontroller, which means you write a single program (typically in C), compile it, and load it up; when it powers up it runs that program and does nothing else.



While the hassle of the ESP32 has never been worth the cost savings over the Raspberry Pi for me, this all changed when I discovered a project called ESPHome.  With ESPHome you connect some sensors to an ESP32, create a YAML file with the config, and upload it to the ESP32, which will then begin sending the data to Home Assistant via an integration.  I couldn't believe how slick the setup was.  You can even update the config file and send the update to the ESP32 via WiFi, without having to go to wherever you have it installed and hook it up to a computer.  With ESPHome, setting up these sensors is easier with ESP32s than with my custom Python scripts running on Raspberry Pis.


These links are mostly dead, as these listings change often.  But I'll provide them, and the price I paid, as a reference for you, visitor from the future.

ESP32 ($6.50 each) - There are a few versions of these out there, and I'm not an expert on the differences, but for our purposes I don't think they matter much.  Just note what pin layout you get, and make sure when you buy more they are the same layout.

BME280 ($24 / 3) - Temperature, humidity, and pressure sensors.  I used to use DHT22 sensors, but BME280s are: 1. more accurate, 2. use the I2C protocol, and 3. include pressure.  I can put two DHT22 sensors next to each other and while it's obvious the graphs follow each other, there is clear variation between them.  With two BME280s the graphs are spot on.  They are also much higher resolution, which makes the graph much smoother.  The I2C protocol allows many sensors to share the same two-wire bus.  Each sensor gets an address, and the host will cycle through them, measuring each one.  This makes it quite easy to add another (I2C based) sensor later on.

BH1750 ($7 / 3) - Light sensor; also uses I2C.

USB Charger - I won't provide a link and cost for these.  The power requirements for ESP32s are much lower than for a Raspberry Pi: under 500 mA, vs at least 1000 mA to over 2000 mA for newer Raspberry Pis.  You should be able to use any random USB charger you have available.

Prototype Boards ($12 / 40) - Not required.  Used to hold multiple sensors together and wire everything up.

Headers ($14 / 120) - Used with the prototype board.

Jumper / Dupont wires ($7 / 120) - Used to connect sensors.  You can technically get by without these if you use the prototype boards, or you can use just these instead of the boards.


Full disclosure: This isn't going to be the type of guide you can just follow along step by step and end up with something that works.  You'll have to understand what you're actually doing to get this to work.  That isn't intentional on my part; it's just the best I can do explaining this.  If you've soldered a bit before, and are familiar with wiring things up to Pis and Arduinos, you should be able to get it to work.

My goal is to build a "shield" from a prototype board, which will sit on top of the ESP32 and allow multiple I2C sensors to be plugged into it.  Since I2C sensors generally use the same pin layout, the front of the board will have multiple header sockets aligned vertically, all connected to the same wires on the back of the board.  Those wires will go to 4 pins on the ESP32.  I use a few inches of solid core Ethernet cable to provide the wires.

This is the pinout of the style of ESP32 I use.  There are a few common ones, and you have to find the one that matches yours.  The pins we care about are 3V3, GND, I2C SCL, and I2C SDA.  All the sensors I buy have pins in the order VCC, GND, SCL, and SDA; this is common, but something to check when searching.  It's not required to have the pins in this order, but if all your sensors have pins in the same order you can create a bus on the board where all the pins are wired in parallel to the 4 pins of the ESP32.

The way I wire the boards up is going to be hard to explain, but hopefully these pictures help.

Front of the prototype board

Back of the prototype board

Start on the front of the board with two 4 pin sockets, aligned vertically.  On the back of the board, wires are soldered up to the pins of those sockets, and run vertically down the board.  It's not clear in the picture, but there are exactly 8 solder points on the back of the board, for the two 4 pin sockets.  Each of the pins of the two sockets is directly connected to the same pin on the other socket.

At the bottom of the board, the wires run through the board, back to the front.  Looking at the front of the board again, the wires then run all over the board to where the pins are on the ESP32.  If you're using a different ESP32 pinout, the exact layout of the wires on the front will look different.  There are also 8 solder points on the front of the board.  Four of them are the electrical connections to the four pins of the ESP32, and the other four are just physical connections holding the top and bottom of each 20 pin header in place; those are technically optional.

Your ESP32 will then have two male headers soldered to it (possibly by you), which will then plug into the 20 pin female headers on the prototype board.  Four pin male headers on the sensors can then be plugged directly into the four pin female headers on the board.  However, I have found that using a short 4 wire jumper cable between the BME280 and the board helps keep the heat from the ESP32 from influencing the temperature reading of the BME280.  You could also just subtract an offset in software if you'd prefer a neater look.  It's not pictured, but I use masking tape to cover all the exposed wires to prevent shorts and class it up a bit.


The software step is way easier than the hardware.  First, follow the ESPHome getting started guide.  Then, when you have your YAML file, modify the following example to get it working with the BME280 and BH1750:

esphome:
  name: rec_room
  platform: ESP32
  board: esp32doit-devkit-v1

wifi:
  ssid: "MyWifiNetwork"
  password: "Password123"

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Rec Room Fallback Hotspot"
    password: "random_password_abc123"

# Enable logging
logger:

# Enable Home Assistant API
api:

i2c:
  sda: 21
  scl: 22
  scan: True

sensor:
  - platform: bh1750
    name: "Rec Room Light"
    address: 0x23
    measurement_time: 254
    update_interval: 60s
    unit_of_measurement: lux

  - platform: bme280
    temperature:
      name: "Rec Room Temperature"
      oversampling: 16x
      filters:
        - lambda: return x * (9.0/5.0) + 32.0;
      unit_of_measurement: "°F"
    pressure:
      name: "Rec Room Pressure"
    humidity:
      name: "Rec Room Humidity"
    address: 0x76
    update_interval: 60s



I'll admit that when I list this whole process out like this, it's a lot more work than I thought.  It's also not particularly cheap for each sensor package, at just under $20 each.  But it's hard to find sensors that plug in, instead of using batteries.  That could be a pro or a con to you, but I hate batteries.  These are also rock solid in reliability, and provide super clean data.  Plus, it's fun?

Sunday, August 29, 2021


This is an old video, but this is a new HD rip from the 16mm film.  I really recommend watching it if you haven't ever.

Sunday, August 22, 2021

Radio Transmissions From Police Helicopter's Chase Of Bizarre Craft Over Tucson Add To Mystery

On February 9, 2021, a U.S. Customs and Border Protection (CBP) helicopter encountered what was described as a “highly modified drone” hovering in controlled airspace above Tucson, Arizona. A Tucson Police Department (TPD) helicopter was called in to aid the CBP aircraft in its pursuit of the small aircraft, but the drone, or whatever it was, was able to outrun both of them as it flew through military airspace, deftly maneuvered around both helicopters with bizarre agility, and ultimately disappeared into cloud cover above the altitude the helicopters could safely fly. A police report previously obtained by The War Zone showed that the TPD crew described the drone as “very sophisticated / specialized” and “able to perform like no other UAS” they had previously encountered. Now we have the actual audio from the CBP helicopter’s interactions with air traffic controllers in Tucson during the incident, as well as audio from an after-action call between the TPD crew and the air traffic control tower.

Wednesday, August 4, 2021

Where Are The Robotic Bricklayers?

There seems to be a few factors at work. One is the fact that a brick or block isn’t simply set down on a solid surface, but is set on top of a thin layer of mortar, which is a mixture of water, sand, and cementitious material. Mortar has sort of complex physical properties - it’s a non-newtonian fluid, and its viscosity increases when it’s moved or shaken. This makes it difficult to apply in a purely mechanical, deterministic way (and also probably makes it difficult for masons to explain what they’re doing - watching them place it you can see lots of complex little motions, and the mortar behaving in sort of strange not-quite-liquid but not-quite-solid ways). And since mortar is a jobsite-mixed material, there will be variation in its properties from batch to batch.

Friday, July 23, 2021

Zip - How not to design a file format.

> How do you read a zip file?

> This is undefined by the spec.

> There are 2 obvious ways.

> 1. Scan from the front, when you see an id for a record do the appropriate thing.

> 2. Scan from the back, find the end-of-central-directory-record and then use it to read through the central directory, only looking at things the central directory references.

I was recently bitten by this at work. I got a zip from someone and couldn't find the files that were supposed to be inside. I asked a colleague, and they sent me a screenshot showing that the files were there, and that they didn't see the set of files that I saw. I had listed the contents of the zip using the "unzip -l" command; they had used the engrampa GUI. At that point I looked at the hexdump of the file. What caught my eye was that I saw the zip magic number near the end of the zip, which was odd. The magic number was also present at the beginning of the file. At this point I suspected that someone had used cat(1) to concatenate two zips together. I checked it with dd(1), extracting the sequence of bytes before the second occurrence of the zip magic number and the remainder into two separate files. And sure enough, at that point both "unzip -l" and engrampa showed the same set of files, and both could show both zips correctly. Turns out engrampa was reading the file forwards, whereas unzip was reading the file backwards.
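The divergence is easy to reproduce with Python's zipfile module, which (like unzip) locates the archive from the back.  A quick sketch — this isn't engrampa's or unzip's actual code, just a demonstration of the two reading strategies:

```python
import io
import zipfile

def make_zip(names):
    """Build an in-memory zip containing the given file names."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        for name in names:
            z.writestr(name, "contents of " + name)
    return buf.getvalue()

# Simulate `cat a.zip b.zip > combined.zip`
combined = make_zip(["a.txt"]) + make_zip(["b.txt"])

# Back-to-front readers (unzip, Python's zipfile) find the *last*
# end-of-central-directory record, so they only see the second archive.
backward = zipfile.ZipFile(io.BytesIO(combined)).namelist()
print(backward)  # ['b.txt']

# A front-to-back reader walks local file headers (magic "PK\x03\x04")
# from offset 0; counting them shows both archives' entries are present.
forward = [off for off in range(len(combined) - 3)
           if combined[off:off + 4] == b"PK\x03\x04"]
print(len(forward))  # 2: one local file header per archive
```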

Thursday, June 17, 2021

The hair-dryer incident

The Hair Dryer Incident was probably the biggest dispute I’ve seen in the mental hospital where I work. Most of the time all the psychiatrists get along and have pretty much the same opinion about important things, but people were at each other’s throats about the Hair Dryer Incident.

Basically, this one obsessive compulsive woman would drive to work every morning and worry she had left the hair dryer on and it was going to burn down her house. So she’d drive back home to check that the hair dryer was off, then drive back to work, then worry that maybe she hadn’t really checked well enough, then drive back, and so on ten or twenty times a day.

It’s a pretty typical case of obsessive-compulsive disorder, but it was really interfering with her life. She worked some high-powered job – I think a lawyer – and she was constantly late to everything because of this driving back and forth, to the point where her career was in a downspin and she thought she would have to quit and go on disability. She wasn’t able to go out with friends, she wasn’t even able to go to restaurants because she would keep fretting she left the hair dryer on at home and have to rush back. She’d seen countless psychiatrists, psychologists, and counselors, she’d done all sorts of therapy, she’d taken every medication in the book, and none of them had helped.

So she came to my hospital and was seen by a colleague of mine, who told her “Hey, have you thought about just bringing the hair dryer with you?”

And it worked.

She would be driving to work in the morning, and she’d start worrying she’d left the hair dryer on and it was going to burn down her house, and so she’d look at the seat next to her, and there would be the hair dryer, right there. And she only had the one hair dryer, which was now accounted for. So she would let out a sigh of relief and keep driving to work.

And approximately half the psychiatrists at my hospital thought this was absolutely scandalous, and This Is Not How One Treats Obsessive Compulsive Disorder, and what if it got out to the broader psychiatric community that instead of giving all of these high-tech medications and sophisticated therapies we were just telling people to put their hair dryers on the front seat of their car?

But I think the guy deserved a medal. Here’s someone who was totally untreatable by the normal methods, with a debilitating condition, and a drop-dead simple intervention that nobody else had thought of gave her her life back. If one day I open up my own psychiatric practice, I am half-seriously considering using a picture of a hair dryer as the logo, just to let everyone know where I stand on this issue.

Sunday, May 23, 2021

Teardown of a PC power supply

You might wonder how the controller chip on the primary side receives feedback about the voltage levels on the secondary side, since there is no electrical connection between the two sides. (In the photo above, you can see the wide gap separating the two sides.) The trick is a clever chip called the opto-isolator. Internally, one side of the chip contains an infra-red LED. The other side of the chip contains a light-sensitive photo-transistor. The feedback signal on the secondary side is sent into the LED, and the signal is detected by the photo-transistor on the primary side. Thus, the opto-isolator provides a bridge between the secondary side and the primary side, communicating by light instead of electricity.

Monday, May 17, 2021

Try This One Weird Trick Russian Hackers Hate

DarkSide and other Russian-language affiliate moneymaking programs have long barred their criminal associates from installing malicious software on computers in a host of Eastern European countries, including Ukraine and Russia. This prohibition dates back to the earliest days of organized cybercrime, and it is intended to minimize scrutiny and interference from local authorities.

In Russia, for example, authorities there generally will not initiate a cybercrime investigation against one of their own unless a company or individual within the country’s borders files an official complaint as a victim. Ensuring that no affiliates can produce victims in their own countries is the easiest way for these criminals to stay off the radar of domestic law enforcement agencies.


DarkSide, like a great many other malware strains, has a hard-coded do-not-install list of countries which are the principal members of the Commonwealth of Independent States (CIS) — former Soviet satellites that all currently have favorable relations with the Kremlin, including Azerbaijan, Belarus, Georgia, Romania, Turkmenistan, Ukraine and Uzbekistan. The full exclusion list in DarkSide (published by Cybereason) is below:

Simply put, countless malware strains will check for the presence of one of these languages on the system, and if they’re detected the malware will exit and fail to install.
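The check itself is almost trivially simple — on the order of this sketch (the exclusion set shown is an illustrative subset, and real strains query installed keyboard layouts and OS language APIs rather than a locale string):

```python
# Illustrative subset of the CIS-language do-not-install list.
EXCLUDED_LANGS = {"ru_RU", "uk_UA", "be_BY", "kk_KZ"}

def should_install(system_locale: str) -> bool:
    """Return False when the system language is on the exclusion list."""
    return system_locale.split(".")[0] not in EXCLUDED_LANGS

print(should_install("en_US.UTF-8"))   # True
print(should_install("ru_RU.UTF-8"))   # False
```

Which is why adding a Russian keyboard layout gets floated as a cheap vaccine.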

Thursday, December 17, 2020

Gaia’s stellar motion for the next 1.6 million years

The stars are constantly moving across the sky. Known as proper motion, this motion is imperceptible to the unaided eye but is being measured with increasing precision by Gaia. This animation shows the proper motions of 40 000 stars, all located within 100 parsecs (326 light years) of the Solar System. The animation begins with the stars in their current positions; the brightness of each dot representing the brightness of the star it represents.

As the animation begins, the trails grow, showing how the stars will change position over the next 80,000 years. Short trails indicate that the star is moving more slowly across the sky, whereas long trails indicate faster motion. To avoid the animation becoming too difficult to interpret, the oldest parts of the trails are erased to only show the newer parts of the stellar motions into the future.

Sometimes it appears as if a star is accelerating (as indicated by a longer trail). This is due to the star getting closer to us. Proper motion is a measure of angular velocity, which means that close-by stars appear to move more quickly across the sky even when their speed is the same as that of other, more distant stars.

Towards the end of the animation, the stars appear to congregate on the right side of the image, leaving the left side emptier. This is an artefact and is caused by the average motion of the Solar System with respect to the surrounding stars.

The animation ends by showing star trails for 400 thousand years into the future.

Saturday, December 12, 2020

Cameras and Lenses

Over the course of this article we’ll build a simple camera from first principles. Our first steps will be very modest – we’ll simply try to take any picture. To do that we need to have a sensor capable of detecting and measuring light that shines onto it.

Saturday, December 5, 2020

Your Smart TV is probably ignoring your PiHole

Smart devices manufacturers often “hard-code” in a public DNS server, like Google’s, and their devices ignore whatever DNS server is assigned by your router - such as your PiHole.

Nearly 70% of smart TVs and 46% of game consoles were found to contain hardcoded DNS settings - allowing them to simply ignore your local network’s DNS server entirely. Smart TVs generate an average of 60 megabytes of outgoing Internet traffic per day, all the while bypassing tools like PiHole.

Thursday, December 3, 2020

AI Generated Music

We started with the original SampleRNN research code in theano. It's a hierarchical LSTM network. LSTMs can be trained to generate sequences. Sequences of whatever. Could be text. Could be weather. We train it on the raw acoustic waveforms of metal albums. As it listens, it tries to guess the next fraction of a millisecond. It plays this game millions of times over a few days. After training, we ask it to come up with its own music, similar to how a weather forecast machine can be asked to invent centuries of seemingly plausible weather patterns.

It hallucinates 10 hours of music this way. That's way too much. So we built another tool to explore and curate it. We find the bits we like and arrange them into an album for human consumption.

It's a challenge to train nets. There's all these hyperparameters to try. How big is it? What's the learning rate? How many tiers of the hierarchy? Which gradient descent optimizer? How does it sample from the distribution? If you get it wrong, it sounds like white noise, silence, or barely anything. It's like brewing beer. How much yeast? How much sugar? You set the parameters early on, and you don't know if it's going to taste good until way later.

We trained 100s of nets until we found good hyperparameters and we published it for the world to use.

Monday, November 30, 2020

DeepMind Solved Protein Folding

We have been stuck on this one problem – how do proteins fold up – for nearly 50 years. To see DeepMind produce a solution for this, having worked personally on this problem for so long and after so many stops and starts, wondering if we’d ever get there, is a very special moment.


You're welcome

Thursday, October 8, 2020

Reverse engineering my cable modem and turning it into an SDR

This is the type of nerdy hacking that makes me jealous.

After removing a few screws from the plastic housing to get access to the board, my first thought was to look for UART headers to take a peek at the serial console. After identifying two candidates consisting of four vias surrounded by a rectangle near the edge of the PCB, it was time to identify the pins. Using a multimeter, the ground pin can be easily identified by checking the continuity with one of the metal shields on board. The VCC pin can be identified by measuring the voltage of each pin when powering on the board. It should be a steady 3.3v, or in some cases 1.8v or 5v. This pin is not needed, but is still useful to identify the operating voltage and eliminate one candidate for the Tx and Rx pins. While booting, the Tx pin will sit on average a little lower than the VCC pin and drop much lower when a lot of data is being output. This leaves the last pin as Rx.

Tuesday, October 6, 2020

The economics of vending machines

It is estimated that roughly ⅓ of the world’s ~15m vending machines are located in the US.

Of these 5m US-based vending machines, ~2m are currently in operation, collectively bringing in $7.4B in annual revenue for those who own them. This means that the average American adult spends ~$35 per year on vending machine items.

What makes the vending industry truly unique is its stratification: The landscape is composed of thousands of small-time independent operators — and no single entity owns >5% of the market.


Thursday, October 1, 2020

Test if your email is letting the sender know when you view an email

There are a ton of ways companies can track whether you view an email.  This site tests which of these methods work even if, for example, you are blocking images:

By the way, you have to click the link in the first email, then click "test this email" to get a second email that actually runs the test.  I was confused at first why it wasn't doing anything.

Tuesday, September 29, 2020

Wednesday, August 26, 2020

Walk with me though the hilariously inconsistent on-screen titles of Star Trek's two-part episodes.

 I couldn't resist the pedantry of this post.

"The Best of Both Worlds"
"The Best of Both Worlds" Part II
Okay, here we go. This is TNG's first actual two-parter. Note how the "Part II" is placed outside the quotes, adopting the style from TOS before it. The difference, other than dropping the "Part I" from part one, is that we’re not using ALL CAPS anymore, so we learn that “Part” is meant to be rendered in title case, with the “P” capitalized. A boring fact that you'll soon learn is the only constant in the universe.

"Redemption II"
Okay, another season-ending cliffhanger resolved! But... now we're just naming them like heavy metal albums, I guess. The only actual established rule for TNG so far is that "part one" does not get a roman numeral…


Thursday, July 9, 2020

A Graphical Analysis of Women's Tops Sold on Goodwill's Website

I set up a script that collected information on listings for more than four million women's shirts for sale through Goodwill's website, going back to mid-2014. The information is deeply flawed—a Goodwill online auction is very different from a Goodwill store—but we can get an idea of how thrift store offerings have changed through the years. There's more info on data collection method below.

Wednesday, July 1, 2020

Using AWS S3 Glacier Deep Archive For Personal Backups

I've been using AWS S3 for personal backups, and it's working well.  The hardest part of doing anything in AWS is that you have no idea what it will cost until you actually do it; they are masters of nickel-and-dime charging.  With that in mind, I wanted to wait until I had a few months of solid data before reporting on how it's been working for me.

If you know me, this may surprise you, but my backup strategy is a bit complex.  However, the relevant part for this post is that my documents folder is about 16 GB and I'm keeping a full backup of that, with daily diffs, for about $0.02 a month.


I did a post estimating the costs last year, and the result has lined up with that.

Here is the relevant part of my AWS bill for May 2020 (June looks to be the same, but isn't complete yet):

There are also some regular S3 line items, since I believe the file list is stored there even when the files are in Deep Archive.  However, I'm far below the cost thresholds there.


I have a local documents folder on my SSD, which gets backed up to a network version nightly via an rsync script.  Folders that are no longer being updated (e.g., my school folder) I will delete from my local version and just keep on the network version.

Every month I create a full zip of my local documents folder and upload it to S3.  Then every day I create a zip of just the files that have changed in the last 40 days.  I chose 40 days to provide some overlap.  You could be more clever and just grab files that changed since the first of the month, but I wanted to keep the process simple due to how important it is.  I also do a yearly backup of the full network version of this folder, which has a lot of stuff in it that hasn't changed in years.
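The daily diff step amounts to something like this sketch — with Python's tarfile standing in for 7zip so it's self-contained, and the upload command and bucket name as placeholders:

```python
import os
import tarfile
import time
from pathlib import Path

def diff_backup(src_dir, out_path, days=40):
    """Archive only the files modified in the last `days` days
    (the 40-day overlap window described above)."""
    cutoff = time.time() - days * 86400
    with tarfile.open(out_path, "w:gz") as tar:
        for path in Path(src_dir).rglob("*"):
            if path.is_file() and path.stat().st_mtime >= cutoff:
                tar.add(path, arcname=str(path.relative_to(src_dir)))

# The upload step would then be something like (bucket name hypothetical):
#   aws s3 cp diff.tar.gz s3://my-backup-bucket/ --storage-class DEEP_ARCHIVE
```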

The result is that I could do a full recovery by pulling the most recent monthly backup and then the most recent daily backup, and replacing the files in the monthly with the newer versions from the daily.  I'd also have to pull the most recent yearly, and extract that to a separate location.

This feels like a pretty simple recovery, all things considered.


The full backup:

And the diff backup:

If you want to adapt these scripts it should be pretty straightforward.  You'll have to have 7zip installed and have the command line aws client set up.  Create a nice long random password and store it in the password file.  Make sure you have a system for retrieving that password if you lose everything.

There's a feature to warn if the compressed file is larger than expected, since that will cost money.  The numbers are arbitrary and work for me; you'd have to adjust them.  Also, if you want to get the emailed warnings you'll have to set up mail and change the email address.

If you do want to use S3 Deep Archive for backups I really recommend reading my previous post, because there are a lot of caveats.  I highly encourage you to combine your files into a single file, because that will reduce the per file costs dramatically.

Also, note there is nothing here to delete these backups.  If all you care about is being able to restore the current version, then you can delete any but the newest version.  Keeping them all gives you the ability to restore at any point in time.  If you do delete them, keep in mind there is a limit to how fast you can delete things on Deep Archive.


I realize there are easier, freer, and arguably better solutions out there for personal backups.  That's it, I don't have a 'but.'  If you're reading this blog, this should not be a surprise.  Now that I have real data, I'm thinking about backing up some of my harder to find media here too.  I estimate 1 TB should cost about $12 per year in any of the cheapest regions.

Saturday, April 4, 2020

Stateless Password Managers

An idea I've had for a while is a password generator where you take a master password, an optional per site password, and the site domain name, combine and hash them to get a unique password for any site.
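A minimal sketch of the idea (the key-derivation parameters here are illustrative, not taken from any particular tool; the point is that the output is deterministic):

```python
import base64
import hashlib

def site_password(master, domain, site_secret="", length=16):
    """Derive a per-site password from a master password, the site's
    domain name, and an optional per-site secret."""
    raw = hashlib.pbkdf2_hmac(
        "sha256",
        (master + site_secret).encode(),
        domain.encode(),     # the domain acts as the salt
        100_000,             # iteration count, chosen arbitrarily
    )
    return base64.b64encode(raw).decode("ascii")[:length]

# Same inputs always yield the same password -- nothing is stored anywhere.
print(site_password("correct horse battery", "example.com"))
```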

This system has a unique benefit over traditional password managers in that you can't lose your passwords.  Even if all your electronics were destroyed and you woke up naked in China tomorrow you could get your passwords just by using an online version of the tool (or failing that, manually doing the steps yourself with a hash generator).

However, the system has a unique drawback of not remembering what the password requirements are.  Some sites require special characters, some don't allow them, some require more than 10 characters, some allow for a max of 8.  It would be easy to translate your hash into whatever set of requirements you have, but you still need to either remember that, or store it somewhere else.

Today I discovered this idea has been implemented, a lot.  It's called a stateless password manager, or a deterministic password manager.  Two examples are:

And here is an article discussing the flaws in this system:

Tuesday, March 24, 2020

Social Distancing Scoreboard

According to the World Health Organization and the CDC, social distancing is currently the most effective way to slow the spread of COVID-19. We created this interactive Scoreboard, updated daily, to empower organizations to measure and understand the efficacy of social distancing initiatives at the local level.

Sunday, March 15, 2020

How do laser distance measures work?

I recently bought a laser tape measure; it's pretty great.  One button to turn it on, then it gives you instant distance measurements to wherever you point the laser.  There are more expensive ones that measure longer distances, but the one I got was $30 and goes up to 65 feet.  I compared it to a normal tape measure and it was accurate and repeatable to an eighth of an inch.  I was pretty impressed with it, and it was a great toy to add to my collection of measuring devices.

However, I began to wonder how it worked, especially since it worked so well, and was so cheap.

How laser distance measures don't work

In principle it would be simple.  Light has a very well known speed, so all you have to do is measure how long it takes for the light to go out and reflect back.  Distance = speed x time.  You could encode a binary number in the laser, just a counter incrementing and resetting when it runs out of numbers.  Measure what number is being reflected back and how long ago you sent that number out and you know how long it took to come back.

However, the devil is in the details, and getting that time precise enough to measure to an eighth of an inch is going to be hard.

An eighth of an inch is 3.175 mm.  The speed of light is 299,792,458 m/s, or 299,792,458,000 mm/s.  3.175 mm / 299,792,458,000 mm/s = 1.059066002254133e-11 seconds, which is about 10.59 picoseconds.  Take the inverse of that and it's 94.42 gigahertz.  I'm going to go out on a limb and assume that the $30 laser tape measure I have in my pocket doesn't have a 100 GHz clock inside of it.
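A quick sanity check of that arithmetic:

```python
c_mm_per_s = 299_792_458 * 1_000   # speed of light in mm/s
eighth_inch_mm = 25.4 / 8          # = 3.175 mm
t = eighth_inch_mm / c_mm_per_s    # one-way travel time, in seconds
f_ghz = 1 / t / 1e9                # clock speed needed to resolve it
print(f"{t * 1e12:.2f} ps -> {f_ghz:.2f} GHz")  # 10.59 ps -> 94.42 GHz
```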

How do they actually work?

Instead of transmitting a counter, just send an alternating pulse.  It doesn't have to be very fast; 1 MHz would be enough.  Then your reflected pulse is the same wave, but delayed slightly.  You only care about measuring the difference in time of the rising and falling edges of the two waves, the delta.  This means you can just compare the two waves using an XOR gate, which is just a fancy way of saying "tell me whenever these waves are different".

Here's an example:

Where the top red line is the original signal, and the second blue line is the reflected version.  Then the third green line is the XORed delta of the two.

When you measure something slightly further away the reflected wave gets more delayed and the delta version gets a longer pulse.
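That relationship can be written down directly.  With a modulation frequency f and a round-trip delay tau, the XOR output is high for 2*tau in each period, so its duty cycle is 2*tau*f; and since tau = 2d/c, the distance is d = duty * c / (4f).  A sketch (the 1 MHz figure is the same illustrative number as above):

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_duty(duty, freq_hz=1e6):
    """Distance implied by the XOR output's duty cycle.
    duty = 2*tau*f (XOR is high for 2*tau per period) and tau = 2*d/C,
    so d = duty * C / (4*f).  Valid while the delay is under half a period."""
    return duty * C / (4 * freq_hz)

# At 1 MHz, each 1% of duty cycle is about 0.75 m of distance:
print(distance_from_duty(0.01))  # ~0.7495 m
```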

Are logic gates fast enough? 

Logic gates like these are cheaper and faster than the circuitry you'd need for a timer.  However, they still aren't quite fast enough for the precision we see in these tools.  Luckily, a delay doesn't really impact the measurement, as long as it's a consistent delay on both the rising and falling edges of the two waves.

All you end up with is a slightly offset delta signal.

Who will measure the measurer?

It might seem like we're back to square one here, with the need to precisely measure the duration of that pulse, but we actually just need to take the average of that signal.  There are a variety of ways we can do this, but as a proof of concept, imagine the delta signal is charging a capacitor, which is simultaneously being drained through a resistor.  You'd end up with a level of charge in the capacitor which translates into what percentage of the time the delta signal is high.
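As a toy model of that capacitor-and-resistor trick, here's a discrete first-order low-pass filter applied to a 30% duty-cycle square wave; the smoothing constant is an arbitrary assumption, but the settled value tracks the duty cycle:

```python
def rc_average(signal, alpha=0.001):
    """Discrete first-order RC low-pass: the 'capacitor' voltage settles
    near the fraction of time the input signal is high."""
    v = 0.0
    for s in signal:
        v += alpha * (s - v)  # charge toward the input, drain toward 0
    return v

# A 30% duty-cycle square wave (3 samples high, 7 low), repeated 1000 times:
wave = ([1] * 3 + [0] * 7) * 1000
print(rc_average(wave))  # settles near 0.30
```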

Now, all you have to do is measure the charge in the capacitor and turn that into a measurement you display.  Let's review what we need:
  • Laser transmitter and optical sensor.
  • MHz clock to turn laser on and off.
  • XOR circuit to compare the transmitted and received signals.
  • A capacitor and resistor circuit to find the average of the digital signal.
  • A way to measure the charge in the capacitor.
  • Something to take that measurement and convert it into the distance.
  • A display.
None of this is very expensive.  I'm pretty amazed they can combine them for less than $30, but at that point, you'd be losing money not to buy one.

Saturday, February 29, 2020

Guessing Smart Phone PINs by Monitoring the Accelerometer
In controlled settings, our prediction model can on average classify the PIN entered 43% of the time and pattern 73% of the time within 5 attempts when selecting from a test set of 50 PINs and 50 patterns. In uncontrolled settings, while users are walking, our model can still classify 20% of the PINs and 40% of the patterns within 5 attempts.

Tuesday, December 31, 2019

Predictions for the decade, from 2010

This is a good look back at what people thought the 2010s would bring at the start of them.

Wednesday, October 30, 2019

A comparison of AWS S3 Glacier Deep Archive region pricing

I'm considering using S3 for personal backups.  They recently introduced a new tier of storage called "S3 Glacier Deep Archive", which is intended for storing files that you will likely never (or perhaps once) need to read.  Every geographic region AWS offers storage in has its own pricing.  I couldn't find a nice table with all the prices compared, so I found the price to store 1 TB for 1 year in each region:

Using their tool:

If you're considering this, keep in mind there are some important caveats.  First, you pay for each request, which means if you're storing 1,000,000 files you will pay $50 just for the requests.  It doesn't matter if each file is 1 MB, or 1 KB, or even 1 byte; it's $0.05 per 1,000 PUT requests.  You will then also pay storage fees every month on top of that.  As far as I can tell, you don't pay for the bandwidth to upload the files.

Retrieving the files has more caveats.  First you need to pick a speed, standard or bulk.  Standard takes up to 12 hours, and bulk is up to 48 hours.  Standard also costs about 10x as much as bulk.  And here you pay for the individual requests, the data retrieved, and (I believe) bandwidth to download from S3.

So if you're storing many smallish files (documents), you're probably much better off combining them all into a single zip file to reduce the number of requests you have to make.  On the other hand, if you're storing large files (videos), you'd probably be better off leaving them on their own, so that ideally you just need to recover one or two, and then don't have to pay for the bandwidth to download them all.

I made this table to compare some scenarios.  The first 3 rows show the costs to retrieve 1 TB split across either 1, 1,024, or 1,048,576 files.  The scenarios with fewer files are cheaper, but not by a ton, and keep in mind that if you only needed a few of those files, it'd be much cheaper to just grab those individual files if they weren't zipped together.

The bottom 2 rows show the cost to get 1 GB of files, either as 1 file or 1,024 files.  Here the cost is negligible pretty much however you store and access it.

So it seems in any case the bandwidth is the biggest cost.  Still, since you generally only pay for bandwidth out of S3 and not into it, you should never really have to pay this unless you're recovering from a pretty major disaster.  There is also the option to use AWS Snowball, where they will mail you a physical drive which you keep for up to 10 days and then mail back.  That works out to be $200 + $0.03 per GB vs just $0.09 per GB for bandwidth, so you need to be transferring several TB (the break-even is around 3.3 TB) before it makes sense.
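To put numbers on these trade-offs, here's a small calculator using the prices quoted above.  AWS pricing varies by region and changes over time, and I'm ignoring retrieval-request and retrieval-data fees for simplicity, so treat the figures as illustrative:

```python
def archive_and_restore_cost(num_files, total_gb,
                             put_per_1000=0.05,    # $50 per million files, as above
                             egress_per_gb=0.09):  # bandwidth out of S3
    """Upload-request cost plus the bandwidth to download everything back."""
    return num_files / 1000 * put_per_1000 + total_gb * egress_per_gb

def snowball_cost(total_gb, service_fee=200.0, per_gb=0.03):
    """Cost of restoring via a mailed physical drive instead."""
    return service_fee + total_gb * per_gb

# Break-even for Snowball vs plain bandwidth: $200 / ($0.09 - $0.03) per GB
print(round(200 / (0.09 - 0.03)))  # 3333 GB, i.e. about 3.3 TB
```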

Wednesday, August 14, 2019

Build a computer out of NAND gates in stages.  This is essentially a game version of my post about how computers work.

Sunday, July 7, 2019

Social Science Research Network

I've been into reading random papers from SSRN lately.  There's some really good stuff on there, like the paper I mentioned in my last post.

Sunday, June 30, 2019

The law of small numbers

I was listening to a podcast when I heard about an interesting probability result in the same vein as the Monty Hall Problem.  The new problem is this: flip a coin 100 times and record the results.  Now pick a random spot in the sequence and check whether the next 3 flips are all heads; if so, we call this a streak.  Repeat until you find a streak of 3.  Now, what is the probability that the 4th flip is also heads?  Is it 50%, like we would expect?  It turns out to be closer to 46%, which is not very far from 50%, but is a clear, systematic deviation.

You can download the paper here, and I recommend you read through the introduction, which is pretty easy to follow.  I think it does a good job of explaining what is going on.  Since no one will do that, here is a table from the paper which helps give some intuition.

This represents every possible outcome from flipping a coin 3 times and looking for a 'streak' of 1 heads.  There are eight total possible outcomes, all equally likely.  In the first two, the streak of 1 heads never happens, or happens on the last flip, where there is no following flip to look at.  Those are thrown away and ignored.  In the other six possible outcomes we do get a streak, at least once, and earlier than the last flip.  The underlined flips represent the possible candidates for the flip that follows a streak.  If we pick the preceding streak, then the underlined flip will be the one we are trying to predict.  In three out of the six outcomes with a streak, the following flip will not be heads.  In two out of the six outcomes the following flip will always be heads.  And in the remaining possible outcome it could be either heads or tails with 50/50 probability, depending on which streak you pick.
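That table's logic can be checked by exhaustive enumeration.  A Python sketch of the simplest case (streaks of 1 heads in 3 flips), picking a qualifying streak uniformly at random within each sequence:

```python
from itertools import product

def p_next_heads(n_flips=3, streak_len=1):
    """Enumerate all flip sequences; within each one, pick a random
    qualifying streak uniformly, and return P(the following flip is heads)."""
    probs = []
    for seq in product("HT", repeat=n_flips):
        # Positions immediately after a streak of heads that still has
        # a following flip to look at:
        candidates = [i + streak_len for i in range(n_flips - streak_len)
                      if all(f == "H" for f in seq[i:i + streak_len])]
        if candidates:  # sequences with no usable streak are discarded
            probs.append(sum(seq[i] == "H" for i in candidates) / len(candidates))
    return sum(probs) / len(probs)

print(p_next_heads())  # ~0.4167 (exactly 2.5/6), not 50%
```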

If you list out all the possible outcomes from any combination of streak length and total flips, you can see that some number of the heads flips are 'consumed' by the streaks themselves.  Those flips can never be following a streak, because they are part of the streak needed to define the streak.  On the other hand, the tails have no restrictions, they are all available to occur in the flip immediately following a streak.  There are simply more tails available to go in the candidate position.  The effect gets smaller as you decrease the streak length or increase the total number of flips in a set.

I found this very surprising, so I wanted to test it out.  I wrote a Ruby script to simulate various coin flips and look for streaks of different lengths, and output the results.  I then decided to rewrite it in a compiled language so it would be faster.  I decided to try out Go, as I've never used it before and I was hoping for something with a bit more syntactic sugar than C.
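Those scripts aren't included in this post, but the core logic boils down to something like this Python sketch of the same experiment:

```python
import random

def trial(streak_len, n_flips, rng):
    """One round: flip n_flips coins, pick a random completed streak of
    heads (one that ends before the last flip), and report whether the
    following flip was also heads.  Returns None if no streak occurred."""
    flips = [rng.random() < 0.5 for _ in range(n_flips)]
    candidates = [i + streak_len for i in range(n_flips - streak_len)
                  if all(flips[i:i + streak_len])]
    return flips[rng.choice(candidates)] if candidates else None

def simulate(streak_len, n_flips, rounds=10_000, seed=1):
    """Fraction of successful rounds where the streak continued."""
    rng = random.Random(seed)
    outcomes = [t for t in (trial(streak_len, n_flips, rng)
                            for _ in range(rounds)) if t is not None]
    return sum(outcomes) / len(outcomes)

print(f"{simulate(3, 100):.2%}")  # consistently lands in the mid-40s
```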

Here are the results of a bunch of combinations of streak lengths and numbers of flips from the Go program:
Looking for a streak of length  1 in    10 total flips. Performed 10000 rounds, and   9973 were successful, found 45.29% continued the streak.
Looking for a streak of length  1 in   100 total flips. Performed 10000 rounds, and  10000 were successful, found 49.43% continued the streak.
Looking for a streak of length  1 in  1000 total flips. Performed 10000 rounds, and  10000 were successful, found 49.91% continued the streak.
Looking for a streak of length  2 in    10 total flips. Performed 10000 rounds, and   8203 were successful, found 38.16% continued the streak.
Looking for a streak of length  2 in   100 total flips. Performed 10000 rounds, and  10000 were successful, found 47.72% continued the streak.
Looking for a streak of length  2 in  1000 total flips. Performed 10000 rounds, and  10000 were successful, found 50.15% continued the streak.
Looking for a streak of length  3 in    10 total flips. Performed 10000 rounds, and   4797 were successful, found 34.88% continued the streak.
Looking for a streak of length  3 in   100 total flips. Performed 10000 rounds, and   9995 were successful, found 45.84% continued the streak.
Looking for a streak of length  3 in  1000 total flips. Performed 10000 rounds, and  10000 were successful, found 49.78% continued the streak.
Looking for a streak of length  4 in    10 total flips. Performed 10000 rounds, and   2152 were successful, found 35.83% continued the streak.
Looking for a streak of length  4 in   100 total flips. Performed 10000 rounds, and   9637 were successful, found 40.61% continued the streak.
Looking for a streak of length  4 in  1000 total flips. Performed 10000 rounds, and  10000 were successful, found 49.21% continued the streak.
Looking for a streak of length  5 in    10 total flips. Performed 10000 rounds, and    985 were successful, found 37.36% continued the streak.
Looking for a streak of length  5 in   100 total flips. Performed 10000 rounds, and   7860 were successful, found 38.66% continued the streak.
Looking for a streak of length  5 in  1000 total flips. Performed 10000 rounds, and  10000 were successful, found 48.91% continued the streak.
Looking for a streak of length  6 in    10 total flips. Performed 10000 rounds, and    388 were successful, found 35.82% continued the streak.
Looking for a streak of length  6 in   100 total flips. Performed 10000 rounds, and   5190 were successful, found 35.24% continued the streak.
Looking for a streak of length  6 in  1000 total flips. Performed 10000 rounds, and   9996 were successful, found 46.68% continued the streak.
Looking for a streak of length  7 in    10 total flips. Performed 10000 rounds, and    140 were successful, found 40.71% continued the streak.
Looking for a streak of length  7 in   100 total flips. Performed 10000 rounds, and   2997 were successful, found 33.83% continued the streak.
Looking for a streak of length  7 in  1000 total flips. Performed 10000 rounds, and   9761 were successful, found 42.40% continued the streak.
Looking for a streak of length  8 in    10 total flips. Performed 10000 rounds, and     52 were successful, found 36.54% continued the streak.
Looking for a streak of length  8 in   100 total flips. Performed 10000 rounds, and   1634 were successful, found 33.60% continued the streak.
Looking for a streak of length  8 in  1000 total flips. Performed 10000 rounds, and   8365 were successful, found 38.27% continued the streak.
Looking for a streak of length  9 in    10 total flips. Performed 10000 rounds, and     17 were successful, found 47.06% continued the streak.
Looking for a streak of length  9 in   100 total flips. Performed 10000 rounds, and    784 were successful, found 33.04% continued the streak.
Looking for a streak of length  9 in  1000 total flips. Performed 10000 rounds, and   6037 were successful, found 35.80% continued the streak.
Looking for a streak of length 10 in    10 total flips. Performed 10000 rounds, and      0 were successful, found NaN% continued the streak.
Looking for a streak of length 10 in   100 total flips. Performed 10000 rounds, and    381 were successful, found 30.71% continued the streak.
Looking for a streak of length 10 in  1000 total flips. Performed 10000 rounds, and   3615 were successful, found 33.91% continued the streak.

Tuesday, April 30, 2019

Should You Time The Market?
You have 2 investment strategies to choose from.
  1. Dollar-cost averaging (DCA):  You invest $100 (inflation-adjusted) every month for all 40 years.
  2. Buy the Dip: You save $100 (inflation-adjusted) each month and only buy when the market is in a dip.  A “dip” is defined as anytime when the market is not at an all-time high.  But, I am going to make this second strategy even better.  Not only will you buy the dip, but I am going to make you omniscient (i.e. “God”) about when you buy.  You will know exactly when the market is at the absolute bottom between any two all-time highs.  This will ensure that when you do buy the dip, it is always at the lowest possible price.

Making a DIY smartwatch

Friday, March 15, 2019

Everything Smarthome

This is a long, but enjoyable article in broken Russian-English about everything smarthome in 2019.