Rechargeable batteries are a pretty complicated subject. Not that I have any particular need for rechargeable batteries, but I found a very in-depth resource for rechargeable battery info. So far I've learned that storing NiMH and Li-ion at 100% or completely discharged isn't good for them. It's also possible that using the slowest possible charging speed isn't best, and that leaving them in the charger for months on end on trickle isn't good either. I have 20+ tabs open, so I should know the answer in a few short hours.
This forum is the internet's resident authority on batteries. This post is a long list of links that deal with various specific subjects.
http://www.candlepowerforums.com/vb/showthread.php?t=217683
This blog exists purely as a place for me to dump random links and thoughts I have rather than emailing them to my friends. It'll have large amounts of inside jokes. Also there will probably be times when I write "you" or refer to an email. Just pretend that you are reading an email to you. If you don't know me you likely won't find anything here interesting. If you do know me you also will not find anything here interesting.
Thursday, December 30, 2010
Wednesday, December 29, 2010
Storing Passwords Securely
Background:
Recently, Mozilla admitted to accidentally leaking password info for 44,000 addons.mozilla.org accounts. A few weeks before that, Gawker leaked password info for over a million accounts. These incidents have spawned a lot of discussion about what was done wrong and what should have been done differently. I wrote something about picking good passwords before, so here I'll just address how to securely store passwords.
I won't attempt to address specifics for how a website should store its password database, or how they can prevent leaks. That is a matter of internal policy, and isn't relevant to what I want to discuss here. I'll begin with the assumption that passwords are stored in a database and that this database will be leaked at some point. The question is what practices should be put into place to mitigate damage when that leak happens?
Solution:
The answer is actually quite simple, and easy to implement. When the user creates a password (with no practical limits on length or characters), that password should be combined with a unique per-user salt, and then a cryptographically secure hash of the combination (I originally suggested SHA-2 here; see the addendum for why bcrypt is the better choice) should be computed and stored alongside whatever token the system uses to identify users (e.g. the username). This process can be repeated (i.e. feeding the hash result back into the hash function) many times if one wishes to increase the computational cost of an attack. There are ready-made functions that do exactly this in any language a site is likely to be using for its backend work, so implementing this process is simple. Indeed, using a ready-made and secure function is likely to be easier than reinventing the wheel with a custom in-house scheme that ignores decades of cryptographic research.
Explanation:
So what does all that mean? Why not just store the passwords in a database directly? After all, if the database leaks at all, it is likely to be a very bad thing. Shouldn't the effort go into ensuring the database never leaks in the first place? Certainly leaks should be prevented, but security is all about assuming compromises and then creating a system that is robust enough to survive them. Many independent layers should work together to provide overall security. In many discussions of secure communications there is the assumption that all communications are intercepted. There are ways to try to prevent eavesdropping, but it is still wise to operate from the premise that you can't foresee and avoid every attack. By making your communications secure even if they are intercepted, you add an extra layer of security. For the same reason, it is wise to store passwords securely in the database, even while working to prevent that database from leaking.
Encryption?
If you're now in agreement that passwords should be stored securely in a database (which should itself, in turn, be stored securely), your next question should be what exactly the process outlined above means, and how it actually provides security. When a password is stored directly in a database, so that it can be read back by the system, it is said to be stored in plaintext. The answer is not to simply encrypt the passwords, at least not in the traditional sense. The problem is that the master key used to encrypt the passwords then has to be stored somewhere the system can read it, effectively in plaintext. If the database is leaked, it is safe to assume any key used to encrypt entries in it will be leaked as well.
You may have noticed that many websites, when you forget your password, will reset it to something random and email that to you, rather than emailing you the original password you chose. The reason is that these sites don't know your password at all. Indeed, a surefire sign of poor security is a site that, when you recover a password, sends you the original rather than a new random one. Even with a metaphorical gun to its head, a secure website could never reveal your password, simply because it doesn't know it. This appears to lead to a paradox, since the website is clearly able to authenticate users. How can a website know that the password a user enters is correct without itself knowing the password? The answer is one-way functions, and specifically cryptographic hash functions.
Hashes:
Hash functions are rather simple concepts. One can provide any data, without a lower or upper limit on size, and the hash will output a short hash value of a constant size. Some key properties of cryptographic hash functions are:
Constant Output Size - The hash always returns a value of the same size, regardless of input size. For example, the MD5 hash of 'Monkey' is '4126964b770eb1e2f3ac01ff7f4fd942'; the 591 MB slackware-13.0-install-d1.iso file has an MD5 hash of '6ef533fcae494af16b3ddda7f1c3c6b7'. Both are 128-bit numbers expressed as 32 hexadecimal digits (the short sketch after this list demonstrates this). This is a key fact. It also shows why limits on password size are silly, and usually a good sign that the site is storing passwords insecurely. No matter how long the user makes their password, the size of the value that needs to be stored is the same.
One Way - The next key property is that hashes aren't reversible. Given a hash, there is no way to figure out what data created it, short of brute force: trying every possible input until one produces the same hash. Even then, if the input data is random, there is no way of knowing for sure that the correct input has been found. It should be obvious that if 591 MB can be reduced to a 16 byte number, there must be multiple inputs that give the same output. While the number of possible outputs with 128 bits is quite large, it is considerably smaller than the infinite number of possible inputs of arbitrary length.
Collision Resistant - When two different inputs create the same output, it is known as a collision. As seen above, collisions are unavoidable. However, for a hash function to be considered cryptographically secure, collisions must be unpredictable. In other words, given an output hash value, if there is any method for finding a collision that is easier than just trying possibilities until one is found, the hash is considered flawed. How much easier it is to find collisions than by brute force determines how seriously flawed the hash is. Attacks on the common MD5 hash have found collisions with a complexity of 2^91. This has the practical effect of reducing the security of MD5 from 128 bits to 91 bits.
Avalanche Effect - Another very important property of cryptographic hash functions is that very small changes to the input must lead to very large changes in the hash. As stated above, the MD5 hash of 'Monkey' is '4126964b770eb1e2f3ac01ff7f4fd942'. On the other hand, the MD5 hash of 'monkey' (changing only the capital 'M' to a lower case 'm') is 'd0763edaa9d9bd2a9516280e9044d885'. A change to a single character made a dramatic change in the output. Given the three hashes calculated here, and told the three inputs they were derived from, but not which matched with which, it would be impossible to tell which two came from nearly identical inputs. There are just under five billion bits in the Linux iso mentioned above. Flipping even a single one of them from 0 to 1, or vice versa, would result in a totally different output. I'm too lazy to demonstrate, but suffice it to say the result would look like another unrelated random 128-bit number.
In the parlance of cryptography this is called the avalanche effect. It is the reason hashes are often listed next to file downloads: by comparing the hash of the file you receive to the one the website lists, you can be confident there were no transmission errors.
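Both the constant output size and the avalanche effect are easy to see for yourself. Here's a small Python sketch using the standard hashlib module (the exact digests are whatever MD5 produces; the point is only their fixed length and how much they differ for a one-character change in input):

import hashlib

a = hashlib.md5(b'Monkey').hexdigest()
b = hashlib.md5(b'monkey').hexdigest()
print(a, len(a))   # 32 hex digits (128 bits), no matter how large the input
print(b, len(b))   # same size again, but a completely different value
# avalanche effect: count how many of the 128 output bits differ
print(bin(int(a, 16) ^ int(b, 16)).count('1'), 'of 128 bits differ')

For any decent hash, roughly half of the output bits flip, even though only one bit of the input changed.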
Benefits of Hashes:
Now that we've covered hashes, let's review the benefits of using them. When the user sets their password, the system computes the hash of that password and stores the hash. Since hashes are fast to compute and produce a short output, there is little overhead in using them. When the user attempts to log in, they enter their password, the system computes its hash, compares that to the one on record, and if they match the user is given access. When an attacker gains access to the database, all he has are the hashes of the passwords. As we learned above, there is no way to reverse a hash to find the password. Since the system takes the password input from the user and calculates the hash itself, even with the hashes an attacker can't gain access: if he submits a hash as the password, the system will calculate the hash of that, which will not match the hash on file. The attacker must find the input that produced the hash, and the only way to do that is brute force. This also provides some protection for users who reuse the same password in multiple places, since the attacker still doesn't know the actual password and can't use it to gain access anywhere else.
If you were following the Gawker leak, you know that lists of the most common passwords were spread online. Gawker did not store the passwords in plaintext; they only stored the hashes. How, then, were the plaintext versions found? The answer shows why simply hashing (as Gawker did) isn't enough. In addition to storing only the hash, one must add salt.
Salts:
Before explaining salt, I'll first answer how the Gawker passwords were recovered from their hashes. There are only a handful of cryptographic hashes in widespread use. MD5 is probably the most common, but there are also the newer SHA-1 and SHA-2 (a.k.a. SHA-256/512). Because everyone uses the same few hashes, a new attack arose: attackers can take a dictionary of common passwords, or even a comprehensive list of all possible combinations within certain parameters (e.g. all alphanumeric strings from 1 to 8 characters), and precompute the hash of each one. This may take a long time; however, once it's done the results can be used over and over again, and even shared. The resulting database of possible passwords and their hashes is known as a rainbow table, and such tables are indeed very common. An attacker with a rainbow table and the database from a compromised website can quickly find every password whose hash has been precomputed in his table.
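The idea is simple enough to sketch in a few lines of Python. This is a toy lookup dictionary rather than a real rainbow table (which uses a fancier time-memory trade-off to save space), but the effect on unsalted hashes is the same:

import hashlib

common_passwords = ['123456', 'password', 'monkey', 'letmein']  # stand-in for a huge dictionary
table = {hashlib.md5(pw.encode()).hexdigest(): pw for pw in common_passwords}  # computed once, reused forever

leaked_hash = hashlib.md5(b'monkey').hexdigest()  # an unsalted hash pulled from a leaked database
print(table.get(leaked_hash))                     # instantly recovers 'monkey'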
The answer to this attack is to salt the passwords before hashing them. By adding some data to each password before the hash is calculated, one ensures that the resulting hashes won't appear in any premade rainbow table. This added data is known as the salt. Simply adding one site-wide random value to every password ensures that any rainbow table would have to be computed specifically for that one website, but an even better practice is to use a piece of data that is unique to every user. For example, a username, email address, or UID would be an ideal thing to add to the password as a salt. Even though those pieces of information aren't secret, they make premade rainbow tables useless, and they also make a custom rainbow table for the entire website (which might otherwise be worth the effort) useless. The only options left to an attacker are to brute-force each password individually, or to attempt to precompute rainbow tables of password+salt combinations.
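Continuing the toy example above, a per-user salt is enough to make the precomputed table miss. A quick sketch (MD5 is used here only to mirror the toy example; a real system should use a slow password hash, as discussed in the addendum):

import hashlib, os

salt = os.urandom(16).hex()                                   # unique random salt, stored per user in plaintext
unsalted = hashlib.md5(b'monkey').hexdigest()
salted = hashlib.md5(('monkey' + salt).encode()).hexdigest()
print(unsalted)   # present in any premade table of common passwords
print(salted)     # different for every user, so no premade table can contain it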
In a well known example, early (1970s) Unix implementations used only a 12-bit salt (stored as two characters). This gave 4096 possible salts per password, and therefore increased the size of the rainbow table needed to cover a given password space by a factor of 4096. While in the 70s this was adequate, by the 2000s storing a full rainbow table covering all 4096 salts had become practical, and any *nix versions still using the old password scheme had to be updated.
Salts must be stored in plaintext, since the system needs them to recompute the hashes. As such, they provide no extra protection for a weak password against a direct brute-force attack. However, if large enough, they all but eliminate the possibility of rainbow table attacks. Since there is little downside to using a longer salt, a modern secure system might compute something like hash(user_password + UID + 256_bit_pseudorandom_salt), with the 256-bit salt being a random number generated by the system for each user when they create a password and stored, in plaintext, in the database along with their UID and the hash of their password. (Per the addendum below, the hash here should be a deliberately slow one such as bcrypt.)
With the combination of hashing and long salts, a website ensures that even if its database is leaked, the only way for an attacker to recover passwords is a brute-force attack, which, if the user has chosen a sufficiently long password, should be prohibitively time consuming, to the point of being practically impossible.
Additional Security:
Additional difficulty in brute-forcing can be created by repeatedly feeding the hash back through itself. If a hash takes one millionth of a second to compute on a modern system, then running the hash a million times in a row for each password increases the time taken to one second. While this provides extra security, the cost only scales linearly, as opposed to exponentially (as is the case when increasing hash or salt size). Since it creates an identical slowdown for both the attacker and the website, it is viewed as a limited layer of security.
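A bare-bones version of that stretching loop might look like this in Python (a sketch only; as the addendum argues, a vetted construction such as bcrypt is a better choice than a hand-rolled loop):

import hashlib

def stretched_hash(password: bytes, salt: bytes, iterations: int = 100_000) -> str:
    # feed the digest back into the hash repeatedly to make each guess more expensive
    digest = password + salt
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

print(stretched_hash(b'hunter2', b'per-user-salt'))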
Example:
To review, a proper modern system might look like this:
$userpass; //the user's inputted password
$uid; //unique id for each user used as database key
$salt; //unique and random 256 bit salt created for each user
$hash; //the hash of the user's pass + salt stored in the database
//user registration:
input $userpass; //get password from user
input $uid; //generate $uid or get from user
$salt = ran(0, 2^256); //create a random 256 bit salt
$hash = bcrypt($userpass + $salt); //find bcrypt hash of password+salt
//store $uid, $salt, and $hash in database, don't store $userpass
//user login:
input $userpass; //get password from user
input $uid; //get user's uid
//find $salt and $hash from database using $uid as a key
if $hash == bcrypt($userpass + $salt)
    //hashes match: access granted
else
    //hashes don't match: access denied
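For something you can actually run, here is the same flow as a minimal Python sketch. I'm assuming the pyca/bcrypt package here (any equivalent bcrypt binding works), and the "database" is just an in-memory dict. Note that bcrypt.gensalt() creates and embeds its own random per-user salt inside the stored hash, so the separate salt column from the pseudocode above isn't needed when using it:

import bcrypt

db = {}  # stand-in for the real database: uid -> hash

def register(uid: str, userpass: str) -> None:
    # gensalt() produces a random per-user salt, which hashpw embeds in the stored hash
    db[uid] = bcrypt.hashpw(userpass.encode(), bcrypt.gensalt())

def login(uid: str, userpass: str) -> bool:
    hashed = db.get(uid)
    # checkpw re-hashes the supplied password with the salt embedded in the stored hash
    return hashed is not None and bcrypt.checkpw(userpass.encode(), hashed)

register('alice', 'correct horse battery staple')
print(login('alice', 'correct horse battery staple'))  # True
print(login('alice', 'wrong password'))                # False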
Addendum:
In the time since writing this, I've been persuaded of the effectiveness of massive GPU parallelization in generating hashes fast enough to brute-force salted passwords. This doesn't change the overview of password storage or why salts are needed, but it does change my recommendation to use SHA-512 and my aversion to recursively applying the hash to slow the process down. A thousand dollars' worth of GPUs can generate 250 million SHA-512 hashes per second (or 20 billion MD5 hashes). Modern hashes intended for password storage are designed to be harder to massively parallelize.
With that in mind, I'll revise my advice. Instead of SHA-512, one should use bcrypt (or similar) with the number of repetitions set high enough so that hashing takes at least 0.1 seconds on modern hardware.
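Picking that work factor empirically might look something like this (a rough sketch, again assuming the pyca/bcrypt package; the right value depends entirely on your hardware, and each increment of bcrypt's cost parameter roughly doubles the work):

import time
import bcrypt

def pick_rounds(target_seconds: float = 0.1) -> int:
    rounds = 4                                  # bcrypt's minimum cost factor
    while True:
        start = time.perf_counter()
        bcrypt.hashpw(b'benchmark password', bcrypt.gensalt(rounds=rounds))
        if time.perf_counter() - start >= target_seconds:
            return rounds
        rounds += 1                             # roughly doubles the time per hash

print(pick_rounds())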
Also, I was well aware that MD5 was hopelessly broken when I wrote this, but I didn't make that clear at all in the post.
Tuesday, December 28, 2010
How Ma Bell Shelved the Future for 60 Years
http://io9.com/5699159/how-ma-bell-shelved-the-future-for-60-years
Let's return to Hickman's magnetic tape and the answering machine. What's interesting is that Hickman's invention in the 1930s would not be "discovered" until the 1990s. For soon after Hickman had demonstrated his invention, AT&T ordered the Labs to cease all research into magnetic storage, and Hickman's research was suppressed and concealed for more than sixty years, coming to light only when the historian Mark Clark came across Hickman's laboratory notebook in the Bell archives.
But why would company management bury such an important and commercially valuable discovery? What were they afraid of? The answer, rather surreal, is evident in the corporate memoranda, also unearthed by Clark, imposing the research ban. AT&T firmly believed that the answering machine, and its magnetic tapes, would lead the public to abandon the telephone.
Thursday, December 23, 2010
A Brilliant Idea
So, xkcd has a book out and I was reading Amazon reviews when I found one that is the funniest thing I've read on this Internet thing in possibly hours. Some may argue that linking to Amazon reviews is pretty silly. However, others may argue that since Amazon reviews are done for neither critical acclaim nor monetary award, it is the purest form of art. Discuss.
This humor may be a bit specific:
Who will like XKCD? - KAZ Vorpal, aka Michael Karl
* Problem-solvers who, upon realizing that the a-hole ops of #Linux answer every question with "RTFM", spoof another IP while running a hacked ID validator and give wrong answers to their own questions, baiting the experts into jumping in with corrections.
Wednesday, December 22, 2010
Monday, December 20, 2010
Julian Assange's Rape Charge
One of the better articles I've seen describing the unusual circumstances of Julian Assange's rape charges.
http://washingtonexaminer.com/news/world/2010/12/assange-rape-case-spotlights-swedens-liberal-laws
Friday, December 17, 2010
Hashcash
http://en.wikipedia.org/wiki/Hashcash
Hashcash is a method of adding a textual stamp to the header of an email to prove the sender has expended a modest amount of CPU time calculating the stamp prior to sending the email. In other words, as the sender has taken a certain amount of time to generate the stamp and send the email, it is unlikely that they are a spammer. The receiver can, at negligible computational cost, verify that the stamp is valid. However, the only known way to find a header with the necessary properties is brute force, trying random values until the answer is found; though testing an individual string is easy, if satisfactory answers are rare enough it will require a substantial number of tries to find the answer.
The theory is that spammers, whose business model relies on their ability to send large numbers of emails with very little cost per message, cannot afford this investment into each individual piece of spam they send. Receivers can verify whether a sender made such an investment and use the results to help filter email.
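The core idea is easy to sketch. Here's a toy partial-preimage search in Python (not the actual Hashcash stamp format, just the same proof-of-work principle):

import hashlib
from itertools import count

def mint(header: str, bits: int = 20) -> str:
    # find a nonce whose SHA-1 hash (with the header) starts with `bits` zero bits
    target = 1 << (160 - bits)                    # ~2^20 tries on average for bits=20
    for nonce in count():
        stamp = f'{header}:{nonce}'
        if int(hashlib.sha1(stamp.encode()).hexdigest(), 16) < target:
            return stamp                          # expensive to find...

def verify(stamp: str, bits: int = 20) -> bool:
    return int(hashlib.sha1(stamp.encode()).hexdigest(), 16) < (1 << (160 - bits))  # ...cheap to check

stamp = mint('recipient@example.com')
print(stamp, verify(stamp))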
Tuesday, December 14, 2010
The Tiger Oil Memos
http://www.lettersofnote.com/2010/08/tiger-oil-memos.html
Memos from a cranky old boss to his employees:
There is one thing that differentiates me from my employees. I am a known son-of-a-bitch, and I care to remain that way. I have the privilege of swearing publicly, in front of anyone, or doing anything I want to because I pay the bills. When you work for me, you don't have that privilege.
Anyone who lets their hair grow below their ears to where I can't see their ears means they don't wash. If they don't wash, they stink, and if they stink, I don't want the son-of-a-bitch around me.
Do not speak to me when you see me. If I want to speak to you, I will do so. I want to save my throat. I don't want to ruin it by saying hello to all of you sons-of-bitches.
There will be no more birthday celebrations, birthday cakes, levity, or celebrations of any kind within the office. This is a business office.
Internet = Cat Pictures
http://www.buzzfeed.com/expresident/109-cats-in-sweaters
Remember before the internet how infrequently you saw large collections of pictures of cats in mildly amusing situations? How did we get by?
Thursday, December 9, 2010
Profile of Ron Paul
http://www.theatlantic.com/magazine/archive/2010/11/the-tea-party-8217-s-brain/8280/
In Congress, Paul usually stands alone. This is a natural consequence of voting against Mother Teresa and the countless other bills on seemingly unobjectionable matters to which only he has objected. For much of his career, his own party routinely blocked him. His notoriety peaked three years ago during a presidential-primary debate in South Carolina when, alone among the 10 candidates, Paul, an isolationist, questioned the U.S. presence in the Middle East and seemed to suggest that it had prompted the September 11 attacks. Rudy Giuliani immediately demanded he withdraw the statement (he refused), and afterward Paul tussled with Sean Hannity of Fox News, which derided him mercilessly for the rest of the campaign. When Republicans convened in Minneapolis to nominate John McCain, Paul was so far out of favor that he and his supporters held their own convention across town.
Look Out Ben Bernanke, Ron Paul Is Gunning for You!
http://www.theatlantic.com/politics/archive/2010/12/look-out-ben-bernanke-ron-paul-is-gunning-for-you/67779/
Last week, I wrote a column about what I think is one of the more intriguing storylines for the new Congress: the possibility that Rep. Ron Paul (R-Texas)--strident libertarian, devotee of Austrian economics, author of End the Fed--might finally assume the chairmanship of the House subcommittee that oversees the Fed. Twice before he's been denied this spot because the GOP leadership worried he was too much of a rabble-rouser and too independent-minded to control. But given the broad animus toward the Fed--a new Bloomberg poll shows that half of all Americans want it reined in or abolished--especially among the ascendant Tea Party wing of the Republican caucus, denying Paul for a third time would have provoked an uproar.
Well, word has just come down from the Financial Services Committee that Paul got the job.
Wednesday, December 8, 2010
ACLU sues N.J. for keeping Hunterdon salt barn plans confidential
http://www.nj.com/news/local/index.ssf/2010/12/aclu_sues_state_for_keeping_hu.html
The American Civil Liberties Union has filed a lawsuit against the state after it refused to release the construction plans for a barn used to store road salt, on the basis that doing so would be a security risk.
Abandoned Pennsylvania Turnpike
"The Abandoned Pennsylvania Turnpike is the common name of a 13 mile (21 km) stretch of the Pennsylvania Turnpike that was bypassed in 1968 when a modern stretch opened to ease traffic congestion. The reasoning behind the bypass was to reduce traffic congestion at the tunnels. In this case, the Sideling Hill Tunnel and Rays Hill Tunnel were bypassed, as was one of the Turnpike's travel plazas. The bypass is located just east of the heavily congested Breezewood interchange at what is now exit 161."
The Abandoned PA Turnpike is a stretch of highway that was bypassed and is no longer used. A few years ago it was sold to the Southern Alleghenies Conservancy.
http://picasaweb.google.com/floor9/AbandonedPennsylvaniaTurnpike#
http://maps.google.com/maps/ms?ie=UTF8&hl=en&msa=0&msid=101818717000588153352.000434f7d70d35ae7a687&t=h&z=12
http://www.briantroutman.com/highways/abandonedpaturnpike/index.html
http://forgottenpa.blogspot.com/2007/09/ten-lonesome-miles-abandoned-pa.html
http://en.wikipedia.org/wiki/Abandoned_Pennsylvania_Turnpike
http://briantroutman.com/highways/abandonedpaturnpike/trip.html
Monday, December 6, 2010
Barack Obama gives way to Republicans over Bush tax cuts
http://www.guardian.co.uk/world/2010/dec/06/barack-obama-bush-tax-cuts
But leading Democrats say the president is backing down and has agreed to extend tax cuts for everyone. In return, the White House appears to have extracted an agreement to extend benefits for the long-term unemployed.
"I know we're in debt so we'll only agree to reduced income if you agree to increase spending. Agreed." - US Government to itself.
Sunday, December 5, 2010
Should Service Learning Be Mandatory For College Students?
Another essay for my English class. This time, in response to the question:
Should service learning be a requirement for college graduation?
Service learning is the name for forcing college students to do volunteer work as part of their college careers. The hope is that this volunteer work will give students a better sense of civic duty and thus be a worthy addition to college curriculums. However, this idea relies on the faulty premise that someone who is forced to volunteer will derive the same benefits as someone who does it out of their own desire to help. Mandatory service learning will not have the desired effect, and should not be forced upon students.
It is perhaps intuitive to think that by making students help others there will be a net positive; there could be no downside to volunteering time and effort to help the community. However, a more detailed inspection reveals there are many negatives, and any positive effects are just wishful thinking.
To begin with, service learning wouldn’t benefit the students’ education. Indeed, many students would be unable to volunteer in their own field, which negates any argument that service learning would help their education. While there may be specific cases where a student with a practical major could benefit from volunteering their efforts, this would simply be an indirect positive effect. Not only that, but in many cases such students are already effectively volunteering their time in the form of unpaid internships. If schools wish students to volunteer in such a manner, they should be working with charities to establish more voluntary internships. However, as soon as students are forced to volunteer for the sake of volunteering, it is no longer about helping the student.
One has to ask: why is it exclusively schools that would take up this forced volunteer work? If it were really a needed benefit to society, then companies could also be forced to volunteer for the community. Indeed, in many cases a company would be better equipped to help in whatever way was needed. However, to a company, time is money. If a company is forced to give its time and resources to volunteer, it might as well give money instead. That money, in the form of taxes, is already paid by both companies and individuals to the government, and that money, in turn, should be used to help communities when needed. If communities need extra help, the answer is increasing taxes to provide that extra help. Forcing college students to provide that help directly ignores the efficiencies gained from specialization. A college student, who isn’t even studying whatever field is needed, would be able to help more by working in their actual field for money, and then giving that money to a specialist, via the government, to provide the direct help. The only problem with that arrangement is that it doesn’t provide the positive feelings that directly helping would. However, gaining a positive sense of accomplishment at the expense of providing less efficient help is a purely selfish motivation.
In the case of labor being needed that almost anyone could do, there is an even greater gain in having the public at large finance the work instead of doing it directly. Instead of having a college student learn about basic construction, which he will likely never use again in his career, wouldn’t it make more sense to pay someone local to the community to do the work instead? That way, not only is the work done, but someone learns a trade which may provide them with a future source of income. In this way, a community can be helped to help itself. This is always preferable to relying on the altruism of others. Not only will it give the community a greater sense of pride, it will also make them less at the mercy of a fickle public. When the fad of service learning passes, and the aid stops, the community will have gained valuable skills which they will be able to continue to use to their own benefit.
Again, the only argument for directly doing the work is the positive feelings it would invoke in those doing the work. Proponents of service learning call these positive feelings a sense of civic duty. They argue that the feelings do have a worth in and of themselves. However, all the sense of civic duty in the world won’t help someone who doesn’t understand the very complex problems of the world. As John Egger eloquently points out in "Service 'Learning' Reduces Learning":
a student’s hours in a soup kitchen is simply charitable contribution and emotional experience. But her feelings provide no clue to the amelioration of poverty. Would a higher minimum wage help ... or hurt? How about tax laws that permit the expensing of capital investments? Or reform of business licensing regulations? In the hours the real student is not tied up in the soup kitchen, she can analyze these policy changes. That’s what education is for. (1)
This is a very important point, for a student’s time is not limitless. Indeed, any time spent doing service learning is time in which they aren’t learning how to actually fix the problems they see. No solutions to the world’s problems will be found if people are unable to agree on the causes of those problems, and as John Egger points out, “A liberal education offers subjects including art, history and chemistry to promote the individual’s understanding of human nature and therefore his ability to cooperate with others in society” (1). Only through a very thorough education will the true causes of very complex problems become clear. By compelling students to divert time and energy away from their studies, colleges would be impairing these students’ ability to actually bring about positive change. That would result in a large detriment to all, much greater than any short-term gain given directly by the students.
In "Local Students Serve As They Learn" Robert Caret argues that “Students discover how political, economical, and social influences affect people and programs in real communities” (1). Bringle and Hatcher echo these sentiments in "Implementing Service Learning in Higher Education": “Emphasizing a value of community involvement and voluntary community service can also create a culture of service on campus” (1). However, both of these papers refer to students who voluntarily worked to help their communities. This ignores what will happen when someone is forced to volunteer against their will. When volunteering ceases to be voluntary, it loses even the sense of civic duty that was its only positive factor. Instead of feeling good about helping the community, students who are forced to do work will associate negative thoughts with volunteering in general. This will end up having a net negative effect as far as future volunteering is concerned.
While service learning may seem beneficial at first glance, a more thorough inspection reveals that there is very little in the way of real benefit accruing to either the student or the community. In cases where a student is able to provide a specific service in his or her field, an internship with a charity would be the better tool to facilitate that. In the case of students providing general help outside their fields, government-sponsored work done by the communities themselves would both be more efficient and provide a greater total benefit to the community. The only justification for having the students do the work themselves is a sense of civic duty. Unfortunately, by forcing the students to do the work, any positive sense of civic duty will be offset by negative emotions from being forced. A better way to gain the desired sense of civic duty is through additional education that addresses the problems and their causes. In the end, the idea of mandatory service learning doesn’t make sense.
Works Cited:
Bringle, Robert G., and Julie A. Hatcher. “Implementing Service Learning in Higher Education” (Excerpt). Journal of Higher Education 67.2 (1996): 221-223. Print.
Caret, Robert L. “Local Students Serve as They Learn.” Examiner.com. The Examiner, 20 September 2007. Web. 9 Sept. 2008.
Egger, John B. “Service ‘Learning’ Reduces Learning.” Examiner.com. The Examiner, 2 October 2007. Web. 9 Sept. 2008.
Friday, December 3, 2010
Close the Washington Monument
http://www.schneier.com/blog/archives/2010/12/close_the_washi.html
Securing the Washington Monument from terrorism has turned out to be a surprisingly difficult job. The concrete fence around the building protects it from attacking vehicles, but there's no visually appealing way to house the airport-level security mechanisms the National Park Service has decided are a must for visitors. It is considering several options, but I think we should close the monument entirely. Let it stand, empty and inaccessible, as a monument to our fears.
Thursday, December 2, 2010
How the CIA Used a Fake Sci-Fi Flick to Rescue Americans from Tehran
http://www.wired.com/wired/archive/15.05/feat_cia.html
Mendez had spent 14 years in the CIA's Office of Technical Service — the part of the spy shop known for trying to plant explosives in Fidel's cigars and wiring cats with microphones for eavesdropping. His specialty was using "identity transformation" to get people out of sticky situations. He'd once transformed a black CIA officer and an Asian diplomat into Caucasian businessmen — using masks that made them ringers for Victor Mature and Rex Harrison — so they could arrange a meeting in the capital of Laos, a country under strict martial law. When a Russian engineer needed to deliver film canisters with extraordinarily sensitive details about the new super-MiG jet, Mendez helped his CIA handlers throw off their KGB tails by outfitting them with a "jack-in-the-box." An officer would wait for a moment of confusion to sneak out of a car. As soon as he did, a spring-loaded mannequin would pop up to give the impression that he was still sitting in the passenger seat. Mendez had helped hundreds of friendly assets escape danger undetected.
For the operation in Tehran, his strategy was straightforward: The Americans would take on false identities, walk right out through Mehrabad Airport, and board a plane. Of course, for this plan to work, someone would have to sneak into Iran, connect with the escapees, equip them with their false identities, and lead them to safety past the increasingly treacherous Iranian security apparatus. And that someone was him.